2025-06-02 16:51:21.790286 | Job console starting
2025-06-02 16:51:21.801999 | Updating git repos
2025-06-02 16:51:21.880076 | Cloning repos into workspace
2025-06-02 16:51:22.144763 | Restoring repo states
2025-06-02 16:51:22.158659 | Merging changes
2025-06-02 16:51:22.158677 | Checking out repos
2025-06-02 16:51:22.485504 | Preparing playbooks
2025-06-02 16:51:23.216613 | Running Ansible setup
2025-06-02 16:51:27.455864 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-02 16:51:28.281637 |
2025-06-02 16:51:28.281868 | PLAY [Base pre]
2025-06-02 16:51:28.304953 |
2025-06-02 16:51:28.305107 | TASK [Setup log path fact]
2025-06-02 16:51:28.328214 | orchestrator | ok
2025-06-02 16:51:28.347677 |
2025-06-02 16:51:28.347827 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-02 16:51:28.394515 | orchestrator | ok
2025-06-02 16:51:28.408522 |
2025-06-02 16:51:28.408652 | TASK [emit-job-header : Print job information]
2025-06-02 16:51:28.450657 | # Job Information
2025-06-02 16:51:28.450932 | Ansible Version: 2.16.14
2025-06-02 16:51:28.450977 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-06-02 16:51:28.451018 | Pipeline: post
2025-06-02 16:51:28.451047 | Executor: 521e9411259a
2025-06-02 16:51:28.451072 | Triggered by: https://github.com/osism/testbed/commit/887b41f5cd4fd4903028405821376cedcc5ffa4a
2025-06-02 16:51:28.451099 | Event ID: cbb70308-3fd1-11f0-9e38-1687f67235b8
2025-06-02 16:51:28.458540 |
2025-06-02 16:51:28.458673 | LOOP [emit-job-header : Print node information]
2025-06-02 16:51:28.598294 | orchestrator | ok:
2025-06-02 16:51:28.598507 | orchestrator | # Node Information
2025-06-02 16:51:28.598542 | orchestrator | Inventory Hostname: orchestrator
2025-06-02 16:51:28.598568 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-02 16:51:28.598591 | orchestrator | Username: zuul-testbed01
2025-06-02 16:51:28.598612 | orchestrator | Distro: Debian 12.11
2025-06-02 16:51:28.598635 | orchestrator | Provider: static-testbed
2025-06-02 16:51:28.598656 | orchestrator | Region:
2025-06-02 16:51:28.598677 | orchestrator | Label: testbed-orchestrator
2025-06-02 16:51:28.598698 | orchestrator | Product Name: OpenStack Nova
2025-06-02 16:51:28.598718 | orchestrator | Interface IP: 81.163.193.140
2025-06-02 16:51:28.617627 |
2025-06-02 16:51:28.617766 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-02 16:51:29.149333 | orchestrator -> localhost | changed
2025-06-02 16:51:29.165864 |
2025-06-02 16:51:29.166087 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-02 16:51:30.287434 | orchestrator -> localhost | changed
2025-06-02 16:51:30.302193 |
2025-06-02 16:51:30.302328 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-02 16:51:30.570826 | orchestrator -> localhost | ok
2025-06-02 16:51:30.587374 |
2025-06-02 16:51:30.587576 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-02 16:51:30.623775 | orchestrator | ok
2025-06-02 16:51:30.642561 | orchestrator | included: /var/lib/zuul/builds/c5c8a8042d63426182240941ef017861/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-02 16:51:30.650757 |
2025-06-02 16:51:30.650939 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-02 16:51:32.031931 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-02 16:51:32.032441 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/c5c8a8042d63426182240941ef017861/work/c5c8a8042d63426182240941ef017861_id_rsa
2025-06-02 16:51:32.032550 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/c5c8a8042d63426182240941ef017861/work/c5c8a8042d63426182240941ef017861_id_rsa.pub
2025-06-02 16:51:32.032627 | orchestrator -> localhost | The key fingerprint is:
2025-06-02 16:51:32.032707 | orchestrator -> localhost | SHA256:Ifygw8ozRRP5dnBlKfbQqw4nUrnEBCfpsup0QuDfxy8 zuul-build-sshkey
2025-06-02 16:51:32.032773 | orchestrator -> localhost | The key's randomart image is:
2025-06-02 16:51:32.032888 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-02 16:51:32.032958 | orchestrator -> localhost | | ++. oo. |
2025-06-02 16:51:32.033024 | orchestrator -> localhost | | o=o =.o |
2025-06-02 16:51:32.033083 | orchestrator -> localhost | |. .o++=.+ . |
2025-06-02 16:51:32.033142 | orchestrator -> localhost | |o .o.oB+..o |
2025-06-02 16:51:32.033201 | orchestrator -> localhost | | o o=+ oS. |
2025-06-02 16:51:32.033264 | orchestrator -> localhost | |..ooo.= o |
2025-06-02 16:51:32.033323 | orchestrator -> localhost | | +=o o B |
2025-06-02 16:51:32.033380 | orchestrator -> localhost | |o oo .Eo |
2025-06-02 16:51:32.033460 | orchestrator -> localhost | |.. .. |
2025-06-02 16:51:32.033561 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-02 16:51:32.033738 | orchestrator -> localhost | ok: Runtime: 0:00:00.855045
2025-06-02 16:51:32.050222 |
2025-06-02 16:51:32.050394 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-02 16:51:32.087446 | orchestrator | ok
2025-06-02 16:51:32.101068 | orchestrator | included: /var/lib/zuul/builds/c5c8a8042d63426182240941ef017861/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-02 16:51:32.110737 |
2025-06-02 16:51:32.110918 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-02 16:51:32.135164 | orchestrator | skipping: Conditional result was False
2025-06-02 16:51:32.143758 |
2025-06-02 16:51:32.143915 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-02 16:51:32.744736 | orchestrator | changed
2025-06-02 16:51:32.753319 |
2025-06-02 16:51:32.753451 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-02 16:51:33.035773 | orchestrator | ok
2025-06-02 16:51:33.046699 |
2025-06-02 16:51:33.046910 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-02 16:51:33.468362 | orchestrator | ok
2025-06-02 16:51:33.475594 |
2025-06-02 16:51:33.475712 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-02 16:51:33.906515 | orchestrator | ok
2025-06-02 16:51:33.916066 |
2025-06-02 16:51:33.916216 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-02 16:51:33.952297 | orchestrator | skipping: Conditional result was False
2025-06-02 16:51:33.965234 |
2025-06-02 16:51:33.965404 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-02 16:51:34.418905 | orchestrator -> localhost | changed
2025-06-02 16:51:34.433066 |
2025-06-02 16:51:34.433186 | TASK [add-build-sshkey : Add back temp key]
2025-06-02 16:51:34.794709 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/c5c8a8042d63426182240941ef017861/work/c5c8a8042d63426182240941ef017861_id_rsa (zuul-build-sshkey)
2025-06-02 16:51:34.795367 | orchestrator -> localhost | ok: Runtime: 0:00:00.018903
2025-06-02 16:51:34.811196 |
2025-06-02 16:51:34.811353 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-02 16:51:35.252402 | orchestrator | ok
2025-06-02 16:51:35.260538 |
2025-06-02 16:51:35.260674 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-02 16:51:35.285729 | orchestrator | skipping: Conditional result was False
2025-06-02 16:51:35.347123 |
2025-06-02 16:51:35.347257 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-02 16:51:35.768513 | orchestrator | ok
2025-06-02 16:51:35.785532 |
2025-06-02 16:51:35.785681 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-02 16:51:35.833576 | orchestrator | ok
2025-06-02 16:51:35.844518 |
2025-06-02 16:51:35.844651 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-02 16:51:36.145092 | orchestrator -> localhost | ok
2025-06-02 16:51:36.160427 |
2025-06-02 16:51:36.160592 | TASK [validate-host : Collect information about the host]
2025-06-02 16:51:37.374127 | orchestrator | ok
2025-06-02 16:51:37.389730 |
2025-06-02 16:51:37.389876 | TASK [validate-host : Sanitize hostname]
2025-06-02 16:51:37.456619 | orchestrator | ok
2025-06-02 16:51:37.465226 |
2025-06-02 16:51:37.465397 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-02 16:51:38.036076 | orchestrator -> localhost | changed
2025-06-02 16:51:38.042798 |
2025-06-02 16:51:38.042971 | TASK [validate-host : Collect information about zuul worker]
2025-06-02 16:51:38.496681 | orchestrator | ok
2025-06-02 16:51:38.502299 |
2025-06-02 16:51:38.502424 | TASK [validate-host : Write out all zuul information for each host]
2025-06-02 16:51:39.082967 | orchestrator -> localhost | changed
2025-06-02 16:51:39.096779 |
2025-06-02 16:51:39.096929 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-02 16:51:39.378944 | orchestrator | ok
2025-06-02 16:51:39.389276 |
2025-06-02 16:51:39.389431 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-02 16:52:18.034421 | orchestrator | changed:
2025-06-02 16:52:18.034758 | orchestrator | .d..t...... src/
2025-06-02 16:52:18.034908 | orchestrator | .d..t...... src/github.com/
2025-06-02 16:52:18.034961 | orchestrator | .d..t...... src/github.com/osism/
2025-06-02 16:52:18.034999 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-02 16:52:18.035034 | orchestrator | RedHat.yml
2025-06-02 16:52:18.048181 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-02 16:52:18.048198 | orchestrator | RedHat.yml
2025-06-02 16:52:18.048251 | orchestrator | = 1.53.0"...
2025-06-02 16:52:32.209875 | orchestrator | 16:52:32.209 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-06-02 16:52:33.289723 | orchestrator | 16:52:33.285 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-02 16:52:34.261011 | orchestrator | 16:52:34.260 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 16:52:35.486290 | orchestrator | 16:52:35.485 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-06-02 16:52:36.482895 | orchestrator | 16:52:36.481 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-06-02 16:52:37.377849 | orchestrator | 16:52:37.377 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-02 16:52:38.214981 | orchestrator | 16:52:38.214 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-02 16:52:38.215094 | orchestrator | 16:52:38.214 STDOUT terraform: Providers are signed by their developers.
2025-06-02 16:52:38.215113 | orchestrator | 16:52:38.214 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-02 16:52:38.215125 | orchestrator | 16:52:38.214 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-02 16:52:38.215137 | orchestrator | 16:52:38.215 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-02 16:52:38.215161 | orchestrator | 16:52:38.215 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-02 16:52:38.215182 | orchestrator | 16:52:38.215 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-02 16:52:38.215196 | orchestrator | 16:52:38.215 STDOUT terraform: you run "tofu init" in the future.
2025-06-02 16:52:38.215734 | orchestrator | 16:52:38.215 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-02 16:52:38.215823 | orchestrator | 16:52:38.215 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-02 16:52:38.215862 | orchestrator | 16:52:38.215 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-02 16:52:38.215879 | orchestrator | 16:52:38.215 STDOUT terraform: should now work.
2025-06-02 16:52:38.215926 | orchestrator | 16:52:38.215 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-02 16:52:38.215983 | orchestrator | 16:52:38.215 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-02 16:52:38.216027 | orchestrator | 16:52:38.215 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-02 16:52:38.451146 | orchestrator | 16:52:38.450 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-06-02 16:52:38.651503 | orchestrator | 16:52:38.651 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-02 16:52:38.651581 | orchestrator | 16:52:38.651 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-02 16:52:38.651589 | orchestrator | 16:52:38.651 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-02 16:52:38.651594 | orchestrator | 16:52:38.651 STDOUT terraform: for this configuration.
2025-06-02 16:52:38.856811 | orchestrator | 16:52:38.856 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-06-02 16:52:38.975840 | orchestrator | 16:52:38.975 STDOUT terraform: ci.auto.tfvars
2025-06-02 16:52:38.975912 | orchestrator | 16:52:38.975 STDOUT terraform: default_custom.tf
2025-06-02 16:52:39.299787 | orchestrator | 16:52:39.298 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-06-02 16:52:40.253476 | orchestrator | 16:52:40.246 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
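The three WARN lines above flag the deprecated `TERRAGRUNT_TFPATH` variable. A minimal migration sketch, assuming the variable is exported in the job's shell environment before Terragrunt runs (the exact export location is not shown in this log; the path is taken verbatim from the warning):

```shell
# Drop the deprecated variable and set its documented replacement.
unset TERRAGRUNT_TFPATH
export TG_TF_PATH=/home/zuul-testbed01/terraform

# Terragrunt then resolves the OpenTofu binary from TG_TF_PATH.
echo "TG_TF_PATH=${TG_TF_PATH}"
```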
2025-06-02 16:52:40.785468 | orchestrator | 16:52:40.785 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-06-02 16:52:41.006836 | orchestrator | 16:52:41.006 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-06-02 16:52:41.006917 | orchestrator | 16:52:41.006 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-06-02 16:52:41.006930 | orchestrator | 16:52:41.006 STDOUT terraform:  + create
2025-06-02 16:52:41.006942 | orchestrator | 16:52:41.006 STDOUT terraform:  <= read (data resources)
2025-06-02 16:52:41.007000 | orchestrator | 16:52:41.006 STDOUT terraform: OpenTofu will perform the following actions:
2025-06-02 16:52:41.007096 | orchestrator | 16:52:41.007 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-06-02 16:52:41.007146 | orchestrator | 16:52:41.007 STDOUT terraform:  # (config refers to values not yet known)
2025-06-02 16:52:41.007197 | orchestrator | 16:52:41.007 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-06-02 16:52:41.007293 | orchestrator | 16:52:41.007 STDOUT terraform:  + checksum = (known after apply)
2025-06-02 16:52:41.007346 | orchestrator | 16:52:41.007 STDOUT terraform:  + created_at = (known after apply)
2025-06-02 16:52:41.007398 | orchestrator | 16:52:41.007 STDOUT terraform:  + file = (known after apply)
2025-06-02 16:52:41.007446 | orchestrator | 16:52:41.007 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.007498 | orchestrator | 16:52:41.007 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:41.007567 | orchestrator | 16:52:41.007 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-06-02 16:52:41.007617 | orchestrator | 16:52:41.007 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-06-02 16:52:41.007653 | orchestrator | 16:52:41.007 STDOUT terraform:  + most_recent = true
2025-06-02 16:52:41.007742 | orchestrator | 16:52:41.007 STDOUT terraform:  + name = (known after apply)
2025-06-02 16:52:41.007827 | orchestrator | 16:52:41.007 STDOUT terraform:  + protected = (known after apply)
2025-06-02 16:52:41.007877 | orchestrator | 16:52:41.007 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:41.007927 | orchestrator | 16:52:41.007 STDOUT terraform:  + schema = (known after apply)
2025-06-02 16:52:41.007989 | orchestrator | 16:52:41.007 STDOUT terraform:  + size_bytes = (known after apply)
2025-06-02 16:52:41.008049 | orchestrator | 16:52:41.007 STDOUT terraform:  + tags = (known after apply)
2025-06-02 16:52:41.008102 | orchestrator | 16:52:41.008 STDOUT terraform:  + updated_at = (known after apply)
2025-06-02 16:52:41.008157 | orchestrator | 16:52:41.008 STDOUT terraform:  }
2025-06-02 16:52:41.008358 | orchestrator | 16:52:41.008 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-06-02 16:52:41.008435 | orchestrator | 16:52:41.008 STDOUT terraform:  # (config refers to values not yet known)
2025-06-02 16:52:41.008509 | orchestrator | 16:52:41.008 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-06-02 16:52:41.008563 | orchestrator | 16:52:41.008 STDOUT terraform:  + checksum = (known after apply)
2025-06-02 16:52:41.008617 | orchestrator | 16:52:41.008 STDOUT terraform:  + created_at = (known after apply)
2025-06-02 16:52:41.008674 | orchestrator | 16:52:41.008 STDOUT terraform:  + file = (known after apply)
2025-06-02 16:52:41.008731 | orchestrator | 16:52:41.008 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.008784 | orchestrator | 16:52:41.008 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:41.008841 | orchestrator | 16:52:41.008 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-06-02 16:52:41.008897 | orchestrator | 16:52:41.008 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-06-02 16:52:41.008943 | orchestrator | 16:52:41.008 STDOUT terraform:  + most_recent = true
2025-06-02 16:52:41.008999 | orchestrator | 16:52:41.008 STDOUT terraform:  + name = (known after apply)
2025-06-02 16:52:41.009054 | orchestrator | 16:52:41.008 STDOUT terraform:  + protected = (known after apply)
2025-06-02 16:52:41.009109 | orchestrator | 16:52:41.009 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:41.009163 | orchestrator | 16:52:41.009 STDOUT terraform:  + schema = (known after apply)
2025-06-02 16:52:41.009226 | orchestrator | 16:52:41.009 STDOUT terraform:  + size_bytes = (known after apply)
2025-06-02 16:52:41.009355 | orchestrator | 16:52:41.009 STDOUT terraform:  + tags = (known after apply)
2025-06-02 16:52:41.009426 | orchestrator | 16:52:41.009 STDOUT terraform:  + updated_at = (known after apply)
2025-06-02 16:52:41.009441 | orchestrator | 16:52:41.009 STDOUT terraform:  }
2025-06-02 16:52:41.009508 | orchestrator | 16:52:41.009 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-06-02 16:52:41.009567 | orchestrator | 16:52:41.009 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-06-02 16:52:41.009638 | orchestrator | 16:52:41.009 STDOUT terraform:  + content = (known after apply)
2025-06-02 16:52:41.009704 | orchestrator | 16:52:41.009 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-02 16:52:41.009798 | orchestrator | 16:52:41.009 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-02 16:52:41.009838 | orchestrator | 16:52:41.009 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-02 16:52:41.009888 | orchestrator | 16:52:41.009 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-02 16:52:41.009954 | orchestrator | 16:52:41.009 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-02 16:52:41.010049 | orchestrator | 16:52:41.009 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-02 16:52:41.010105 | orchestrator | 16:52:41.010 STDOUT terraform:  + directory_permission = "0777"
2025-06-02 16:52:41.010152 | orchestrator | 16:52:41.010 STDOUT terraform:  + file_permission = "0644"
2025-06-02 16:52:41.010223 | orchestrator | 16:52:41.010 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-06-02 16:52:41.010345 | orchestrator | 16:52:41.010 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.010362 | orchestrator | 16:52:41.010 STDOUT terraform:  }
2025-06-02 16:52:41.010427 | orchestrator | 16:52:41.010 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-06-02 16:52:41.010481 | orchestrator | 16:52:41.010 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-06-02 16:52:41.010550 | orchestrator | 16:52:41.010 STDOUT terraform:  + content = (known after apply)
2025-06-02 16:52:41.010637 | orchestrator | 16:52:41.010 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-02 16:52:41.010702 | orchestrator | 16:52:41.010 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-02 16:52:41.010770 | orchestrator | 16:52:41.010 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-02 16:52:41.010838 | orchestrator | 16:52:41.010 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-02 16:52:41.010908 | orchestrator | 16:52:41.010 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-02 16:52:41.010976 | orchestrator | 16:52:41.010 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-02 16:52:41.011020 | orchestrator | 16:52:41.010 STDOUT terraform:  + directory_permission = "0777"
2025-06-02 16:52:41.011067 | orchestrator | 16:52:41.011 STDOUT terraform:  + file_permission = "0644"
2025-06-02 16:52:41.011127 | orchestrator | 16:52:41.011 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-06-02 16:52:41.011195 | orchestrator | 16:52:41.011 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.011208 | orchestrator | 16:52:41.011 STDOUT terraform:  }
2025-06-02 16:52:41.011277 | orchestrator | 16:52:41.011 STDOUT terraform:  # local_file.inventory will be created
2025-06-02 16:52:41.011323 | orchestrator | 16:52:41.011 STDOUT terraform:  + resource "local_file" "inventory" {
2025-06-02 16:52:41.011392 | orchestrator | 16:52:41.011 STDOUT terraform:  + content = (known after apply)
2025-06-02 16:52:41.011457 | orchestrator | 16:52:41.011 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-02 16:52:41.011523 | orchestrator | 16:52:41.011 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-02 16:52:41.011596 | orchestrator | 16:52:41.011 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-02 16:52:41.011667 | orchestrator | 16:52:41.011 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-02 16:52:41.011743 | orchestrator | 16:52:41.011 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-02 16:52:41.011820 | orchestrator | 16:52:41.011 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-02 16:52:41.011836 | orchestrator | 16:52:41.011 STDOUT terraform:  + directory_permission = "0777"
2025-06-02 16:52:41.011887 | orchestrator | 16:52:41.011 STDOUT terraform:  + file_permission = "0644"
2025-06-02 16:52:41.011944 | orchestrator | 16:52:41.011 STDOUT terraform:  + filename = "inventory.ci"
2025-06-02 16:52:41.012014 | orchestrator | 16:52:41.011 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.012029 | orchestrator | 16:52:41.012 STDOUT terraform:  }
2025-06-02 16:52:41.012124 | orchestrator | 16:52:41.012 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-06-02 16:52:41.012182 | orchestrator | 16:52:41.012 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-06-02 16:52:41.012241 | orchestrator | 16:52:41.012 STDOUT terraform:  + content = (sensitive value)
2025-06-02 16:52:41.012318 | orchestrator | 16:52:41.012 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-06-02 16:52:41.012405 | orchestrator | 16:52:41.012 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-06-02 16:52:41.012471 | orchestrator | 16:52:41.012 STDOUT terraform:  + content_md5 = (known after apply)
2025-06-02 16:52:41.012537 | orchestrator | 16:52:41.012 STDOUT terraform:  + content_sha1 = (known after apply)
2025-06-02 16:52:41.012603 | orchestrator | 16:52:41.012 STDOUT terraform:  + content_sha256 = (known after apply)
2025-06-02 16:52:41.012668 | orchestrator | 16:52:41.012 STDOUT terraform:  + content_sha512 = (known after apply)
2025-06-02 16:52:41.012714 | orchestrator | 16:52:41.012 STDOUT terraform:  + directory_permission = "0700"
2025-06-02 16:52:41.012797 | orchestrator | 16:52:41.012 STDOUT terraform:  + file_permission = "0600"
2025-06-02 16:52:41.012858 | orchestrator | 16:52:41.012 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-06-02 16:52:41.012930 | orchestrator | 16:52:41.012 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.012944 | orchestrator | 16:52:41.012 STDOUT terraform:  }
2025-06-02 16:52:41.013027 | orchestrator | 16:52:41.012 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-06-02 16:52:41.013085 | orchestrator | 16:52:41.013 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-06-02 16:52:41.013125 | orchestrator | 16:52:41.013 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.013139 | orchestrator | 16:52:41.013 STDOUT terraform:  }
2025-06-02 16:52:41.013235 | orchestrator | 16:52:41.013 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-06-02 16:52:41.013362 | orchestrator | 16:52:41.013 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-06-02 16:52:41.013429 | orchestrator | 16:52:41.013 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:41.013478 | orchestrator | 16:52:41.013 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:41.013549 | orchestrator | 16:52:41.013 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.013617 | orchestrator | 16:52:41.013 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:41.013682 | orchestrator | 16:52:41.013 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:41.013766 | orchestrator | 16:52:41.013 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-06-02 16:52:41.013842 | orchestrator | 16:52:41.013 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:41.013873 | orchestrator | 16:52:41.013 STDOUT terraform:  + size = 80
2025-06-02 16:52:41.013919 | orchestrator | 16:52:41.013 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:41.013965 | orchestrator | 16:52:41.013 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:41.013979 | orchestrator | 16:52:41.013 STDOUT terraform:  }
2025-06-02 16:52:41.014426 | orchestrator | 16:52:41.013 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-06-02 16:52:41.014523 | orchestrator | 16:52:41.014 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 16:52:41.014596 | orchestrator | 16:52:41.014 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:41.014642 | orchestrator | 16:52:41.014 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:41.014712 | orchestrator | 16:52:41.014 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.014780 | orchestrator | 16:52:41.014 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:41.014892 | orchestrator | 16:52:41.014 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:41.014944 | orchestrator | 16:52:41.014 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-06-02 16:52:41.015062 | orchestrator | 16:52:41.014 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:41.015141 | orchestrator | 16:52:41.015 STDOUT terraform:  + size = 80
2025-06-02 16:52:41.015188 | orchestrator | 16:52:41.015 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:41.015240 | orchestrator | 16:52:41.015 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:41.015271 | orchestrator | 16:52:41.015 STDOUT terraform:  }
2025-06-02 16:52:41.015363 | orchestrator | 16:52:41.015 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-06-02 16:52:41.015453 | orchestrator | 16:52:41.015 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 16:52:41.015524 | orchestrator | 16:52:41.015 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:41.015565 | orchestrator | 16:52:41.015 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:41.015636 | orchestrator | 16:52:41.015 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.015708 | orchestrator | 16:52:41.015 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:41.015777 | orchestrator | 16:52:41.015 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:41.015859 | orchestrator | 16:52:41.015 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-06-02 16:52:41.015928 | orchestrator | 16:52:41.015 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:41.015986 | orchestrator | 16:52:41.015 STDOUT terraform:  + size = 80
2025-06-02 16:52:41.016001 | orchestrator | 16:52:41.015 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:41.016054 | orchestrator | 16:52:41.015 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:41.016067 | orchestrator | 16:52:41.016 STDOUT terraform:  }
2025-06-02 16:52:41.016162 | orchestrator | 16:52:41.016 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-06-02 16:52:41.016263 | orchestrator | 16:52:41.016 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 16:52:41.016361 | orchestrator | 16:52:41.016 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:41.016409 | orchestrator | 16:52:41.016 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:41.016477 | orchestrator | 16:52:41.016 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.016544 | orchestrator | 16:52:41.016 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:41.016614 | orchestrator | 16:52:41.016 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:41.016717 | orchestrator | 16:52:41.016 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-06-02 16:52:41.021003 | orchestrator | 16:52:41.020 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:41.021047 | orchestrator | 16:52:41.020 STDOUT terraform:  + size = 80
2025-06-02 16:52:41.021056 | orchestrator | 16:52:41.021 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:41.021099 | orchestrator | 16:52:41.021 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:41.021111 | orchestrator | 16:52:41.021 STDOUT terraform:  }
2025-06-02 16:52:41.021191 | orchestrator | 16:52:41.021 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-06-02 16:52:41.021284 | orchestrator | 16:52:41.021 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 16:52:41.021343 | orchestrator | 16:52:41.021 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:41.021386 | orchestrator | 16:52:41.021 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:41.021447 | orchestrator | 16:52:41.021 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.021507 | orchestrator | 16:52:41.021 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:41.021564 | orchestrator | 16:52:41.021 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:41.021637 | orchestrator | 16:52:41.021 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-06-02 16:52:41.021696 | orchestrator | 16:52:41.021 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:41.021732 | orchestrator | 16:52:41.021 STDOUT terraform:  + size = 80
2025-06-02 16:52:41.021773 | orchestrator | 16:52:41.021 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:41.021913 | orchestrator | 16:52:41.021 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:41.021946 | orchestrator | 16:52:41.021 STDOUT terraform:  }
2025-06-02 16:52:41.022066 | orchestrator | 16:52:41.021 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-06-02 16:52:41.022122 | orchestrator | 16:52:41.022 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 16:52:41.022181 | orchestrator | 16:52:41.022 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:41.022221 | orchestrator | 16:52:41.022 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:41.022308 | orchestrator | 16:52:41.022 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.022376 | orchestrator | 16:52:41.022 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:41.022436 | orchestrator | 16:52:41.022 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:41.022511 | orchestrator | 16:52:41.022 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-06-02 16:52:41.022570 | orchestrator | 16:52:41.022 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:41.022607 | orchestrator | 16:52:41.022 STDOUT terraform:  + size = 80
2025-06-02 16:52:41.022654 | orchestrator | 16:52:41.022 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:41.022687 | orchestrator | 16:52:41.022 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:41.022697 | orchestrator | 16:52:41.022 STDOUT terraform:  }
2025-06-02 16:52:41.022830 | orchestrator | 16:52:41.022 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-06-02 16:52:41.022908 | orchestrator | 16:52:41.022 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-06-02 16:52:41.022971 | orchestrator | 16:52:41.022 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:41.023011 | orchestrator | 16:52:41.022 STDOUT terraform:  + availability_zone = "nova"
2025-06-02 16:52:41.023072 | orchestrator | 16:52:41.023 STDOUT terraform:  + id = (known after apply)
2025-06-02 16:52:41.023131 | orchestrator | 16:52:41.023 STDOUT terraform:  + image_id = (known after apply)
2025-06-02 16:52:41.023192 | orchestrator | 16:52:41.023 STDOUT terraform:  + metadata = (known after apply)
2025-06-02 16:52:41.023308 | orchestrator | 16:52:41.023 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-06-02 16:52:41.023369 | orchestrator | 16:52:41.023 STDOUT terraform:  + region = (known after apply)
2025-06-02 16:52:41.023405 | orchestrator | 16:52:41.023 STDOUT terraform:  + size = 80
2025-06-02 16:52:41.023448 | orchestrator | 16:52:41.023 STDOUT terraform:  + volume_retype_policy = "never"
2025-06-02 16:52:41.023490 | orchestrator | 16:52:41.023 STDOUT terraform:  + volume_type = "ssd"
2025-06-02 16:52:41.023500 | orchestrator | 16:52:41.023 STDOUT terraform:  }
2025-06-02 16:52:41.023579 | orchestrator | 16:52:41.023 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-06-02 16:52:41.023653 | orchestrator | 16:52:41.023 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-06-02 16:52:41.023712 | orchestrator | 16:52:41.023 STDOUT terraform:  + attachment = (known after apply)
2025-06-02 16:52:41.023751 | orchestrator | 16:52:41.023 STDOUT terraform:  +
availability_zone = "nova" 2025-06-02 16:52:41.023814 | orchestrator | 16:52:41.023 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.023867 | orchestrator | 16:52:41.023 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:41.023934 | orchestrator | 16:52:41.023 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-02 16:52:41.023977 | orchestrator | 16:52:41.023 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.024008 | orchestrator | 16:52:41.023 STDOUT terraform:  + size = 20 2025-06-02 16:52:41.024044 | orchestrator | 16:52:41.024 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:41.024080 | orchestrator | 16:52:41.024 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:41.024091 | orchestrator | 16:52:41.024 STDOUT terraform:  } 2025-06-02 16:52:41.024165 | orchestrator | 16:52:41.024 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-02 16:52:41.024237 | orchestrator | 16:52:41.024 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:41.024299 | orchestrator | 16:52:41.024 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:41.024332 | orchestrator | 16:52:41.024 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:41.024383 | orchestrator | 16:52:41.024 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.024436 | orchestrator | 16:52:41.024 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:41.024492 | orchestrator | 16:52:41.024 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-02 16:52:41.024543 | orchestrator | 16:52:41.024 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.024573 | orchestrator | 16:52:41.024 STDOUT terraform:  + size = 20 2025-06-02 16:52:41.024608 | orchestrator | 16:52:41.024 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:41.024645 | orchestrator | 
16:52:41.024 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:41.024654 | orchestrator | 16:52:41.024 STDOUT terraform:  } 2025-06-02 16:52:41.024723 | orchestrator | 16:52:41.024 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-02 16:52:41.024796 | orchestrator | 16:52:41.024 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:41.024848 | orchestrator | 16:52:41.024 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:41.024883 | orchestrator | 16:52:41.024 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:41.024940 | orchestrator | 16:52:41.024 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.024992 | orchestrator | 16:52:41.024 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:41.025049 | orchestrator | 16:52:41.024 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-02 16:52:41.025100 | orchestrator | 16:52:41.025 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.025167 | orchestrator | 16:52:41.025 STDOUT terraform:  + size = 20 2025-06-02 16:52:41.025176 | orchestrator | 16:52:41.025 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:41.025184 | orchestrator | 16:52:41.025 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:41.025211 | orchestrator | 16:52:41.025 STDOUT terraform:  } 2025-06-02 16:52:41.025319 | orchestrator | 16:52:41.025 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-02 16:52:41.025381 | orchestrator | 16:52:41.025 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:41.025431 | orchestrator | 16:52:41.025 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:41.025467 | orchestrator | 16:52:41.025 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:41.025520 | orchestrator | 16:52:41.025 STDOUT 
terraform:  + id = (known after apply) 2025-06-02 16:52:41.025572 | orchestrator | 16:52:41.025 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:41.025629 | orchestrator | 16:52:41.025 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-02 16:52:41.025708 | orchestrator | 16:52:41.025 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.025739 | orchestrator | 16:52:41.025 STDOUT terraform:  + size = 20 2025-06-02 16:52:41.025783 | orchestrator | 16:52:41.025 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:41.025814 | orchestrator | 16:52:41.025 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:41.025822 | orchestrator | 16:52:41.025 STDOUT terraform:  } 2025-06-02 16:52:41.025886 | orchestrator | 16:52:41.025 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-02 16:52:41.025941 | orchestrator | 16:52:41.025 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:41.025989 | orchestrator | 16:52:41.025 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:41.026053 | orchestrator | 16:52:41.025 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:41.026285 | orchestrator | 16:52:41.026 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.026349 | orchestrator | 16:52:41.026 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:41.026431 | orchestrator | 16:52:41.026 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-02 16:52:41.026481 | orchestrator | 16:52:41.026 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.026512 | orchestrator | 16:52:41.026 STDOUT terraform:  + size = 20 2025-06-02 16:52:41.026549 | orchestrator | 16:52:41.026 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:41.026587 | orchestrator | 16:52:41.026 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:41.026607 | 
orchestrator | 16:52:41.026 STDOUT terraform:  } 2025-06-02 16:52:41.026661 | orchestrator | 16:52:41.026 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-02 16:52:41.026773 | orchestrator | 16:52:41.026 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:41.026785 | orchestrator | 16:52:41.026 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:41.026824 | orchestrator | 16:52:41.026 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:41.026872 | orchestrator | 16:52:41.026 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.026923 | orchestrator | 16:52:41.026 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:41.026967 | orchestrator | 16:52:41.026 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-02 16:52:41.027017 | orchestrator | 16:52:41.026 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.027046 | orchestrator | 16:52:41.027 STDOUT terraform:  + size = 20 2025-06-02 16:52:41.027078 | orchestrator | 16:52:41.027 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:41.027111 | orchestrator | 16:52:41.027 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:41.027119 | orchestrator | 16:52:41.027 STDOUT terraform:  } 2025-06-02 16:52:41.027196 | orchestrator | 16:52:41.027 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-02 16:52:41.027241 | orchestrator | 16:52:41.027 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:41.027304 | orchestrator | 16:52:41.027 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:41.027339 | orchestrator | 16:52:41.027 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:41.027383 | orchestrator | 16:52:41.027 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.027430 | orchestrator | 
16:52:41.027 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:41.027479 | orchestrator | 16:52:41.027 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-02 16:52:41.027530 | orchestrator | 16:52:41.027 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.027558 | orchestrator | 16:52:41.027 STDOUT terraform:  + size = 20 2025-06-02 16:52:41.027602 | orchestrator | 16:52:41.027 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:41.027625 | orchestrator | 16:52:41.027 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:41.027633 | orchestrator | 16:52:41.027 STDOUT terraform:  } 2025-06-02 16:52:41.027694 | orchestrator | 16:52:41.027 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-02 16:52:41.027751 | orchestrator | 16:52:41.027 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:41.027798 | orchestrator | 16:52:41.027 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:41.027828 | orchestrator | 16:52:41.027 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:41.027877 | orchestrator | 16:52:41.027 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.027923 | orchestrator | 16:52:41.027 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:41.027973 | orchestrator | 16:52:41.027 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-02 16:52:41.028018 | orchestrator | 16:52:41.027 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.028046 | orchestrator | 16:52:41.028 STDOUT terraform:  + size = 20 2025-06-02 16:52:41.028079 | orchestrator | 16:52:41.028 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:41.028108 | orchestrator | 16:52:41.028 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:41.028115 | orchestrator | 16:52:41.028 STDOUT terraform:  } 2025-06-02 16:52:41.028179 | orchestrator | 
16:52:41.028 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-02 16:52:41.028235 | orchestrator | 16:52:41.028 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-02 16:52:41.028314 | orchestrator | 16:52:41.028 STDOUT terraform:  + attachment = (known after apply) 2025-06-02 16:52:41.028345 | orchestrator | 16:52:41.028 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:41.028394 | orchestrator | 16:52:41.028 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.028440 | orchestrator | 16:52:41.028 STDOUT terraform:  + metadata = (known after apply) 2025-06-02 16:52:41.028492 | orchestrator | 16:52:41.028 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-02 16:52:41.028538 | orchestrator | 16:52:41.028 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.028569 | orchestrator | 16:52:41.028 STDOUT terraform:  + size = 20 2025-06-02 16:52:41.028600 | orchestrator | 16:52:41.028 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-02 16:52:41.028632 | orchestrator | 16:52:41.028 STDOUT terraform:  + volume_type = "ssd" 2025-06-02 16:52:41.028640 | orchestrator | 16:52:41.028 STDOUT terraform:  } 2025-06-02 16:52:41.028760 | orchestrator | 16:52:41.028 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-02 16:52:41.028818 | orchestrator | 16:52:41.028 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-02 16:52:41.028860 | orchestrator | 16:52:41.028 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 16:52:41.028903 | orchestrator | 16:52:41.028 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 16:52:41.028947 | orchestrator | 16:52:41.028 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 16:52:41.028989 | orchestrator | 16:52:41.028 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 
16:52:41.029030 | orchestrator | 16:52:41.028 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:41.029062 | orchestrator | 16:52:41.029 STDOUT terraform:  + config_drive = true 2025-06-02 16:52:41.029106 | orchestrator | 16:52:41.029 STDOUT terraform:  + created = (known after apply) 2025-06-02 16:52:41.029149 | orchestrator | 16:52:41.029 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 16:52:41.029187 | orchestrator | 16:52:41.029 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-02 16:52:41.029217 | orchestrator | 16:52:41.029 STDOUT terraform:  + force_delete = false 2025-06-02 16:52:41.029295 | orchestrator | 16:52:41.029 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 16:52:41.029305 | orchestrator | 16:52:41.029 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.029358 | orchestrator | 16:52:41.029 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 16:52:41.029395 | orchestrator | 16:52:41.029 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 16:52:41.029426 | orchestrator | 16:52:41.029 STDOUT terraform:  + key_pair = "testbed" 2025-06-02 16:52:41.029463 | orchestrator | 16:52:41.029 STDOUT terraform:  + name = "testbed-manager" 2025-06-02 16:52:41.029494 | orchestrator | 16:52:41.029 STDOUT terraform:  + power_state = "active" 2025-06-02 16:52:41.029537 | orchestrator | 16:52:41.029 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.029579 | orchestrator | 16:52:41.029 STDOUT terraform:  + security_groups = (known after apply) 2025-06-02 16:52:41.029608 | orchestrator | 16:52:41.029 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 16:52:41.029653 | orchestrator | 16:52:41.029 STDOUT terraform:  + updated = (known after apply) 2025-06-02 16:52:41.029697 | orchestrator | 16:52:41.029 STDOUT terraform:  + user_data = (known after apply) 2025-06-02 16:52:41.029720 | orchestrator | 16:52:41.029 STDOUT terraform:  + block_device 
{ 2025-06-02 16:52:41.029750 | orchestrator | 16:52:41.029 STDOUT terraform:  + boot_index = 0 2025-06-02 16:52:41.029784 | orchestrator | 16:52:41.029 STDOUT terraform:  + delete_on_termination = false 2025-06-02 16:52:41.029819 | orchestrator | 16:52:41.029 STDOUT terraform:  + destination_type = "volume" 2025-06-02 16:52:41.029855 | orchestrator | 16:52:41.029 STDOUT terraform:  + multiattach = false 2025-06-02 16:52:41.029893 | orchestrator | 16:52:41.029 STDOUT terraform:  + source_type = "volume" 2025-06-02 16:52:41.029939 | orchestrator | 16:52:41.029 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 16:52:41.029950 | orchestrator | 16:52:41.029 STDOUT terraform:  } 2025-06-02 16:52:41.029987 | orchestrator | 16:52:41.029 STDOUT terraform:  + network { 2025-06-02 16:52:41.030013 | orchestrator | 16:52:41.029 STDOUT terraform:  + access_network = false 2025-06-02 16:52:41.030086 | orchestrator | 16:52:41.030 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 16:52:41.030124 | orchestrator | 16:52:41.030 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 16:52:41.030164 | orchestrator | 16:52:41.030 STDOUT terraform:  + mac = (known after apply) 2025-06-02 16:52:41.030202 | orchestrator | 16:52:41.030 STDOUT terraform:  + name = (known after apply) 2025-06-02 16:52:41.030241 | orchestrator | 16:52:41.030 STDOUT terraform:  + port = (known after apply) 2025-06-02 16:52:41.030385 | orchestrator | 16:52:41.030 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 16:52:41.030403 | orchestrator | 16:52:41.030 STDOUT terraform:  } 2025-06-02 16:52:41.030408 | orchestrator | 16:52:41.030 STDOUT terraform:  } 2025-06-02 16:52:41.030458 | orchestrator | 16:52:41.030 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-02 16:52:41.030512 | orchestrator | 16:52:41.030 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 16:52:41.030557 | orchestrator | 
16:52:41.030 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 16:52:41.030601 | orchestrator | 16:52:41.030 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 16:52:41.030644 | orchestrator | 16:52:41.030 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 16:52:41.030688 | orchestrator | 16:52:41.030 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:41.030720 | orchestrator | 16:52:41.030 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:41.030746 | orchestrator | 16:52:41.030 STDOUT terraform:  + config_drive = true 2025-06-02 16:52:41.030788 | orchestrator | 16:52:41.030 STDOUT terraform:  + created = (known after apply) 2025-06-02 16:52:41.030831 | orchestrator | 16:52:41.030 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 16:52:41.030865 | orchestrator | 16:52:41.030 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 16:52:41.030895 | orchestrator | 16:52:41.030 STDOUT terraform:  + force_delete = false 2025-06-02 16:52:41.030932 | orchestrator | 16:52:41.030 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 16:52:41.030973 | orchestrator | 16:52:41.030 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.031014 | orchestrator | 16:52:41.030 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 16:52:41.031054 | orchestrator | 16:52:41.031 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 16:52:41.031082 | orchestrator | 16:52:41.031 STDOUT terraform:  + key_pair = "testbed" 2025-06-02 16:52:41.031117 | orchestrator | 16:52:41.031 STDOUT terraform:  + name = "testbed-node-0" 2025-06-02 16:52:41.031144 | orchestrator | 16:52:41.031 STDOUT terraform:  + power_state = "active" 2025-06-02 16:52:41.031191 | orchestrator | 16:52:41.031 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.031222 | orchestrator | 16:52:41.031 STDOUT terraform:  + security_groups = (known after apply) 
2025-06-02 16:52:41.031280 | orchestrator | 16:52:41.031 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 16:52:41.031307 | orchestrator | 16:52:41.031 STDOUT terraform:  + updated = (known after apply) 2025-06-02 16:52:41.031364 | orchestrator | 16:52:41.031 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-02 16:52:41.031372 | orchestrator | 16:52:41.031 STDOUT terraform:  + block_device { 2025-06-02 16:52:41.031406 | orchestrator | 16:52:41.031 STDOUT terraform:  + boot_index = 0 2025-06-02 16:52:41.031437 | orchestrator | 16:52:41.031 STDOUT terraform:  + delete_on_termination = false 2025-06-02 16:52:41.031471 | orchestrator | 16:52:41.031 STDOUT terraform:  + destination_type = "volume" 2025-06-02 16:52:41.031503 | orchestrator | 16:52:41.031 STDOUT terraform:  + multiattach = false 2025-06-02 16:52:41.031537 | orchestrator | 16:52:41.031 STDOUT terraform:  + source_type = "volume" 2025-06-02 16:52:41.031580 | orchestrator | 16:52:41.031 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 16:52:41.031588 | orchestrator | 16:52:41.031 STDOUT terraform:  } 2025-06-02 16:52:41.031612 | orchestrator | 16:52:41.031 STDOUT terraform:  + network { 2025-06-02 16:52:41.031636 | orchestrator | 16:52:41.031 STDOUT terraform:  + access_network = false 2025-06-02 16:52:41.031672 | orchestrator | 16:52:41.031 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 16:52:41.031704 | orchestrator | 16:52:41.031 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 16:52:41.031740 | orchestrator | 16:52:41.031 STDOUT terraform:  + mac = (known after apply) 2025-06-02 16:52:41.031781 | orchestrator | 16:52:41.031 STDOUT terraform:  + name = (known after apply) 2025-06-02 16:52:41.031820 | orchestrator | 16:52:41.031 STDOUT terraform:  + port = (known after apply) 2025-06-02 16:52:41.031855 | orchestrator | 16:52:41.031 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 16:52:41.031862 | 
orchestrator | 16:52:41.031 STDOUT terraform:  } 2025-06-02 16:52:41.031885 | orchestrator | 16:52:41.031 STDOUT terraform:  } 2025-06-02 16:52:41.031932 | orchestrator | 16:52:41.031 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-06-02 16:52:41.031978 | orchestrator | 16:52:41.031 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 16:52:41.032018 | orchestrator | 16:52:41.031 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 16:52:41.032056 | orchestrator | 16:52:41.032 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 16:52:41.032095 | orchestrator | 16:52:41.032 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 16:52:41.032137 | orchestrator | 16:52:41.032 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:41.032164 | orchestrator | 16:52:41.032 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:41.032189 | orchestrator | 16:52:41.032 STDOUT terraform:  + config_drive = true 2025-06-02 16:52:41.032225 | orchestrator | 16:52:41.032 STDOUT terraform:  + created = (known after apply) 2025-06-02 16:52:41.032281 | orchestrator | 16:52:41.032 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 16:52:41.032314 | orchestrator | 16:52:41.032 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 16:52:41.032341 | orchestrator | 16:52:41.032 STDOUT terraform:  + force_delete = false 2025-06-02 16:52:41.032381 | orchestrator | 16:52:41.032 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-02 16:52:41.032419 | orchestrator | 16:52:41.032 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.032459 | orchestrator | 16:52:41.032 STDOUT terraform:  + image_id = (known after apply) 2025-06-02 16:52:41.032498 | orchestrator | 16:52:41.032 STDOUT terraform:  + image_name = (known after apply) 2025-06-02 16:52:41.032526 | orchestrator | 16:52:41.032 STDOUT terraform:  + 
key_pair = "testbed" 2025-06-02 16:52:41.032563 | orchestrator | 16:52:41.032 STDOUT terraform:  + name = "testbed-node-1" 2025-06-02 16:52:41.032591 | orchestrator | 16:52:41.032 STDOUT terraform:  + power_state = "active" 2025-06-02 16:52:41.032629 | orchestrator | 16:52:41.032 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.032669 | orchestrator | 16:52:41.032 STDOUT terraform:  + security_groups = (known after apply) 2025-06-02 16:52:41.032696 | orchestrator | 16:52:41.032 STDOUT terraform:  + stop_before_destroy = false 2025-06-02 16:52:41.032737 | orchestrator | 16:52:41.032 STDOUT terraform:  + updated = (known after apply) 2025-06-02 16:52:41.032792 | orchestrator | 16:52:41.032 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-06-02 16:52:41.032800 | orchestrator | 16:52:41.032 STDOUT terraform:  + block_device { 2025-06-02 16:52:41.032837 | orchestrator | 16:52:41.032 STDOUT terraform:  + boot_index = 0 2025-06-02 16:52:41.032866 | orchestrator | 16:52:41.032 STDOUT terraform:  + delete_on_termination = false 2025-06-02 16:52:41.032900 | orchestrator | 16:52:41.032 STDOUT terraform:  + destination_type = "volume" 2025-06-02 16:52:41.032931 | orchestrator | 16:52:41.032 STDOUT terraform:  + multiattach = false 2025-06-02 16:52:41.032971 | orchestrator | 16:52:41.032 STDOUT terraform:  + source_type = "volume" 2025-06-02 16:52:41.033008 | orchestrator | 16:52:41.032 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 16:52:41.033015 | orchestrator | 16:52:41.033 STDOUT terraform:  } 2025-06-02 16:52:41.033038 | orchestrator | 16:52:41.033 STDOUT terraform:  + network { 2025-06-02 16:52:41.033062 | orchestrator | 16:52:41.033 STDOUT terraform:  + access_network = false 2025-06-02 16:52:41.033097 | orchestrator | 16:52:41.033 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-02 16:52:41.033132 | orchestrator | 16:52:41.033 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-02 
16:52:41.033167 | orchestrator | 16:52:41.033 STDOUT terraform:  + mac = (known after apply) 2025-06-02 16:52:41.033210 | orchestrator | 16:52:41.033 STDOUT terraform:  + name = (known after apply) 2025-06-02 16:52:41.033247 | orchestrator | 16:52:41.033 STDOUT terraform:  + port = (known after apply) 2025-06-02 16:52:41.033296 | orchestrator | 16:52:41.033 STDOUT terraform:  + uuid = (known after apply) 2025-06-02 16:52:41.033303 | orchestrator | 16:52:41.033 STDOUT terraform:  } 2025-06-02 16:52:41.033311 | orchestrator | 16:52:41.033 STDOUT terraform:  } 2025-06-02 16:52:41.033363 | orchestrator | 16:52:41.033 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-06-02 16:52:41.033411 | orchestrator | 16:52:41.033 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-02 16:52:41.033450 | orchestrator | 16:52:41.033 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-02 16:52:41.033490 | orchestrator | 16:52:41.033 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-02 16:52:41.033529 | orchestrator | 16:52:41.033 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-02 16:52:41.033569 | orchestrator | 16:52:41.033 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:41.033597 | orchestrator | 16:52:41.033 STDOUT terraform:  + availability_zone = "nova" 2025-06-02 16:52:41.033620 | orchestrator | 16:52:41.033 STDOUT terraform:  + config_drive = true 2025-06-02 16:52:41.033660 | orchestrator | 16:52:41.033 STDOUT terraform:  + created = (known after apply) 2025-06-02 16:52:41.033699 | orchestrator | 16:52:41.033 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-02 16:52:41.033731 | orchestrator | 16:52:41.033 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-06-02 16:52:41.033760 | orchestrator | 16:52:41.033 STDOUT terraform:  + force_delete = false 2025-06-02 16:52:41.033797 | orchestrator | 16:52:41.033 STDOUT terraform:  + 
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
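(Editor's aside: the repetitive per-index entries in this plan are what Terraform emits for counted resources. A minimal sketch of definitions that would produce such a plan is shown below; all variable names, the index arithmetic, and the referenced volume/port resources are illustrative assumptions, not the actual osism/testbed sources.)

```hcl
# Sketch only -- resource references and counts are assumptions for illustration.
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"

  # Boot from a pre-created volume; it is kept when the instance is destroyed,
  # matching delete_on_termination = false in the plan.
  block_device {
    boot_index            = 0
    source_type           = "volume"
    destination_type      = "volume"
    delete_on_termination = false
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id
  }

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}

# Nine extra volume attachments across the nodes; the index-to-node mapping
# here is a guess, purely for illustration.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.extra_volume[count.index].id
}
```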
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
+ allowed_address_pairs { 2025-06-02 16:52:41.068174 | orchestrator | 16:52:41.068 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 16:52:41.068196 | orchestrator | 16:52:41.068 STDOUT terraform:  } 2025-06-02 16:52:41.068225 | orchestrator | 16:52:41.068 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.068302 | orchestrator | 16:52:41.068 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 16:52:41.068329 | orchestrator | 16:52:41.068 STDOUT terraform:  } 2025-06-02 16:52:41.068360 | orchestrator | 16:52:41.068 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.068407 | orchestrator | 16:52:41.068 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 16:52:41.068427 | orchestrator | 16:52:41.068 STDOUT terraform:  } 2025-06-02 16:52:41.068465 | orchestrator | 16:52:41.068 STDOUT terraform:  + binding (known after apply) 2025-06-02 16:52:41.068489 | orchestrator | 16:52:41.068 STDOUT terraform:  + fixed_ip { 2025-06-02 16:52:41.068524 | orchestrator | 16:52:41.068 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-06-02 16:52:41.068565 | orchestrator | 16:52:41.068 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 16:52:41.068589 | orchestrator | 16:52:41.068 STDOUT terraform:  } 2025-06-02 16:52:41.068603 | orchestrator | 16:52:41.068 STDOUT terraform:  } 2025-06-02 16:52:41.068663 | orchestrator | 16:52:41.068 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-06-02 16:52:41.068726 | orchestrator | 16:52:41.068 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 16:52:41.068776 | orchestrator | 16:52:41.068 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 16:52:41.068849 | orchestrator | 16:52:41.068 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 16:52:41.069456 | orchestrator | 16:52:41.068 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-06-02 16:52:41.069503 | orchestrator | 16:52:41.069 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:41.069554 | orchestrator | 16:52:41.069 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 16:52:41.069607 | orchestrator | 16:52:41.069 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 16:52:41.069657 | orchestrator | 16:52:41.069 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 16:52:41.069705 | orchestrator | 16:52:41.069 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 16:52:41.069762 | orchestrator | 16:52:41.069 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.069807 | orchestrator | 16:52:41.069 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 16:52:41.069853 | orchestrator | 16:52:41.069 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 16:52:41.069896 | orchestrator | 16:52:41.069 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 16:52:41.069942 | orchestrator | 16:52:41.069 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 16:52:41.069990 | orchestrator | 16:52:41.069 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.070056 | orchestrator | 16:52:41.069 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 16:52:41.070111 | orchestrator | 16:52:41.070 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.070138 | orchestrator | 16:52:41.070 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.070175 | orchestrator | 16:52:41.070 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 16:52:41.070195 | orchestrator | 16:52:41.070 STDOUT terraform:  } 2025-06-02 16:52:41.070222 | orchestrator | 16:52:41.070 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.070282 | orchestrator | 16:52:41.070 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 16:52:41.070288 | 
orchestrator | 16:52:41.070 STDOUT terraform:  } 2025-06-02 16:52:41.070309 | orchestrator | 16:52:41.070 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.070346 | orchestrator | 16:52:41.070 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 16:52:41.070370 | orchestrator | 16:52:41.070 STDOUT terraform:  } 2025-06-02 16:52:41.070383 | orchestrator | 16:52:41.070 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.070422 | orchestrator | 16:52:41.070 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 16:52:41.070443 | orchestrator | 16:52:41.070 STDOUT terraform:  } 2025-06-02 16:52:41.070471 | orchestrator | 16:52:41.070 STDOUT terraform:  + binding (known after apply) 2025-06-02 16:52:41.070493 | orchestrator | 16:52:41.070 STDOUT terraform:  + fixed_ip { 2025-06-02 16:52:41.070526 | orchestrator | 16:52:41.070 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-06-02 16:52:41.070564 | orchestrator | 16:52:41.070 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 16:52:41.070571 | orchestrator | 16:52:41.070 STDOUT terraform:  } 2025-06-02 16:52:41.070595 | orchestrator | 16:52:41.070 STDOUT terraform:  } 2025-06-02 16:52:41.070653 | orchestrator | 16:52:41.070 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-06-02 16:52:41.070710 | orchestrator | 16:52:41.070 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 16:52:41.070754 | orchestrator | 16:52:41.070 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 16:52:41.070801 | orchestrator | 16:52:41.070 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 16:52:41.070846 | orchestrator | 16:52:41.070 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 16:52:41.070892 | orchestrator | 16:52:41.070 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:41.070936 | orchestrator | 
16:52:41.070 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 16:52:41.070981 | orchestrator | 16:52:41.070 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 16:52:41.071026 | orchestrator | 16:52:41.070 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 16:52:41.071073 | orchestrator | 16:52:41.071 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 16:52:41.071119 | orchestrator | 16:52:41.071 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.071164 | orchestrator | 16:52:41.071 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 16:52:41.071209 | orchestrator | 16:52:41.071 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 16:52:41.071265 | orchestrator | 16:52:41.071 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 16:52:41.071324 | orchestrator | 16:52:41.071 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 16:52:41.071371 | orchestrator | 16:52:41.071 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.071419 | orchestrator | 16:52:41.071 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 16:52:41.071464 | orchestrator | 16:52:41.071 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.071493 | orchestrator | 16:52:41.071 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.071531 | orchestrator | 16:52:41.071 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 16:52:41.071541 | orchestrator | 16:52:41.071 STDOUT terraform:  } 2025-06-02 16:52:41.071573 | orchestrator | 16:52:41.071 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.071611 | orchestrator | 16:52:41.071 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 16:52:41.071623 | orchestrator | 16:52:41.071 STDOUT terraform:  } 2025-06-02 16:52:41.071650 | orchestrator | 16:52:41.071 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 
16:52:41.071687 | orchestrator | 16:52:41.071 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 16:52:41.071718 | orchestrator | 16:52:41.071 STDOUT terraform:  } 2025-06-02 16:52:41.071747 | orchestrator | 16:52:41.071 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.071786 | orchestrator | 16:52:41.071 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 16:52:41.071805 | orchestrator | 16:52:41.071 STDOUT terraform:  } 2025-06-02 16:52:41.071836 | orchestrator | 16:52:41.071 STDOUT terraform:  + binding (known after apply) 2025-06-02 16:52:41.071857 | orchestrator | 16:52:41.071 STDOUT terraform:  + fixed_ip { 2025-06-02 16:52:41.071896 | orchestrator | 16:52:41.071 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-06-02 16:52:41.071935 | orchestrator | 16:52:41.071 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 16:52:41.071943 | orchestrator | 16:52:41.071 STDOUT terraform:  } 2025-06-02 16:52:41.071967 | orchestrator | 16:52:41.071 STDOUT terraform:  } 2025-06-02 16:52:41.072023 | orchestrator | 16:52:41.071 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-06-02 16:52:41.072082 | orchestrator | 16:52:41.072 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 16:52:41.072125 | orchestrator | 16:52:41.072 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 16:52:41.072170 | orchestrator | 16:52:41.072 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 16:52:41.072214 | orchestrator | 16:52:41.072 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 16:52:41.072286 | orchestrator | 16:52:41.072 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:41.072321 | orchestrator | 16:52:41.072 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 16:52:41.072366 | orchestrator | 16:52:41.072 STDOUT terraform:  + device_owner = (known after 
apply) 2025-06-02 16:52:41.072411 | orchestrator | 16:52:41.072 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 16:52:41.072465 | orchestrator | 16:52:41.072 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 16:52:41.072513 | orchestrator | 16:52:41.072 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.072559 | orchestrator | 16:52:41.072 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 16:52:41.072608 | orchestrator | 16:52:41.072 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 16:52:41.072654 | orchestrator | 16:52:41.072 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 16:52:41.072698 | orchestrator | 16:52:41.072 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 16:52:41.072742 | orchestrator | 16:52:41.072 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.072793 | orchestrator | 16:52:41.072 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 16:52:41.072835 | orchestrator | 16:52:41.072 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.072861 | orchestrator | 16:52:41.072 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.072900 | orchestrator | 16:52:41.072 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 16:52:41.072922 | orchestrator | 16:52:41.072 STDOUT terraform:  } 2025-06-02 16:52:41.072949 | orchestrator | 16:52:41.072 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.072987 | orchestrator | 16:52:41.072 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 16:52:41.072993 | orchestrator | 16:52:41.072 STDOUT terraform:  } 2025-06-02 16:52:41.073024 | orchestrator | 16:52:41.072 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.073061 | orchestrator | 16:52:41.073 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 16:52:41.073070 | orchestrator | 16:52:41.073 STDOUT terraform:  } 
2025-06-02 16:52:41.073102 | orchestrator | 16:52:41.073 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.073137 | orchestrator | 16:52:41.073 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 16:52:41.073158 | orchestrator | 16:52:41.073 STDOUT terraform:  } 2025-06-02 16:52:41.073189 | orchestrator | 16:52:41.073 STDOUT terraform:  + binding (known after apply) 2025-06-02 16:52:41.073210 | orchestrator | 16:52:41.073 STDOUT terraform:  + fixed_ip { 2025-06-02 16:52:41.073244 | orchestrator | 16:52:41.073 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-06-02 16:52:41.073311 | orchestrator | 16:52:41.073 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 16:52:41.073339 | orchestrator | 16:52:41.073 STDOUT terraform:  } 2025-06-02 16:52:41.073359 | orchestrator | 16:52:41.073 STDOUT terraform:  } 2025-06-02 16:52:41.073419 | orchestrator | 16:52:41.073 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-06-02 16:52:41.073476 | orchestrator | 16:52:41.073 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-06-02 16:52:41.073524 | orchestrator | 16:52:41.073 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 16:52:41.073569 | orchestrator | 16:52:41.073 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-02 16:52:41.073615 | orchestrator | 16:52:41.073 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-02 16:52:41.073661 | orchestrator | 16:52:41.073 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:41.073711 | orchestrator | 16:52:41.073 STDOUT terraform:  + device_id = (known after apply) 2025-06-02 16:52:41.073756 | orchestrator | 16:52:41.073 STDOUT terraform:  + device_owner = (known after apply) 2025-06-02 16:52:41.073803 | orchestrator | 16:52:41.073 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-02 16:52:41.073850 | orchestrator | 
16:52:41.073 STDOUT terraform:  + dns_name = (known after apply) 2025-06-02 16:52:41.073896 | orchestrator | 16:52:41.073 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.073942 | orchestrator | 16:52:41.073 STDOUT terraform:  + mac_address = (known after apply) 2025-06-02 16:52:41.073988 | orchestrator | 16:52:41.073 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 16:52:41.074063 | orchestrator | 16:52:41.073 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-02 16:52:41.074108 | orchestrator | 16:52:41.074 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-02 16:52:41.074155 | orchestrator | 16:52:41.074 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.074204 | orchestrator | 16:52:41.074 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-02 16:52:41.074270 | orchestrator | 16:52:41.074 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.074300 | orchestrator | 16:52:41.074 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.074333 | orchestrator | 16:52:41.074 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-02 16:52:41.074353 | orchestrator | 16:52:41.074 STDOUT terraform:  } 2025-06-02 16:52:41.074387 | orchestrator | 16:52:41.074 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.074427 | orchestrator | 16:52:41.074 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-06-02 16:52:41.074447 | orchestrator | 16:52:41.074 STDOUT terraform:  } 2025-06-02 16:52:41.074474 | orchestrator | 16:52:41.074 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.074510 | orchestrator | 16:52:41.074 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-02 16:52:41.074518 | orchestrator | 16:52:41.074 STDOUT terraform:  } 2025-06-02 16:52:41.074551 | orchestrator | 16:52:41.074 STDOUT terraform:  + allowed_address_pairs { 2025-06-02 16:52:41.074586 | orchestrator | 16:52:41.074 STDOUT 
terraform:  + ip_address = "192.168.16.9/20" 2025-06-02 16:52:41.074607 | orchestrator | 16:52:41.074 STDOUT terraform:  } 2025-06-02 16:52:41.074637 | orchestrator | 16:52:41.074 STDOUT terraform:  + binding (known after apply) 2025-06-02 16:52:41.074647 | orchestrator | 16:52:41.074 STDOUT terraform:  + fixed_ip { 2025-06-02 16:52:41.074685 | orchestrator | 16:52:41.074 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-06-02 16:52:41.074722 | orchestrator | 16:52:41.074 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 16:52:41.074730 | orchestrator | 16:52:41.074 STDOUT terraform:  } 2025-06-02 16:52:41.074753 | orchestrator | 16:52:41.074 STDOUT terraform:  } 2025-06-02 16:52:41.074818 | orchestrator | 16:52:41.074 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-06-02 16:52:41.074880 | orchestrator | 16:52:41.074 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-06-02 16:52:41.074905 | orchestrator | 16:52:41.074 STDOUT terraform:  + force_destroy = false 2025-06-02 16:52:41.074941 | orchestrator | 16:52:41.074 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.074979 | orchestrator | 16:52:41.074 STDOUT terraform:  + port_id = (known after apply) 2025-06-02 16:52:41.075015 | orchestrator | 16:52:41.074 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.075052 | orchestrator | 16:52:41.075 STDOUT terraform:  + router_id = (known after apply) 2025-06-02 16:52:41.075088 | orchestrator | 16:52:41.075 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-02 16:52:41.075096 | orchestrator | 16:52:41.075 STDOUT terraform:  } 2025-06-02 16:52:41.075147 | orchestrator | 16:52:41.075 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-06-02 16:52:41.075193 | orchestrator | 16:52:41.075 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-06-02 16:52:41.075238 
| orchestrator | 16:52:41.075 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-02 16:52:41.075323 | orchestrator | 16:52:41.075 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:41.075357 | orchestrator | 16:52:41.075 STDOUT terraform:  + availability_zone_hints = [ 2025-06-02 16:52:41.075380 | orchestrator | 16:52:41.075 STDOUT terraform:  + "nova", 2025-06-02 16:52:41.075388 | orchestrator | 16:52:41.075 STDOUT terraform:  ] 2025-06-02 16:52:41.075442 | orchestrator | 16:52:41.075 STDOUT terraform:  + distributed = (known after apply) 2025-06-02 16:52:41.075486 | orchestrator | 16:52:41.075 STDOUT terraform:  + enable_snat = (known after apply) 2025-06-02 16:52:41.075547 | orchestrator | 16:52:41.075 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-06-02 16:52:41.075595 | orchestrator | 16:52:41.075 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.075633 | orchestrator | 16:52:41.075 STDOUT terraform:  + name = "testbed" 2025-06-02 16:52:41.075680 | orchestrator | 16:52:41.075 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.075729 | orchestrator | 16:52:41.075 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.075766 | orchestrator | 16:52:41.075 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-06-02 16:52:41.075774 | orchestrator | 16:52:41.075 STDOUT terraform:  } 2025-06-02 16:52:41.075852 | orchestrator | 16:52:41.075 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-06-02 16:52:41.075920 | orchestrator | 16:52:41.075 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-06-02 16:52:41.075947 | orchestrator | 16:52:41.075 STDOUT terraform:  + description = "ssh" 2025-06-02 16:52:41.075981 | orchestrator | 16:52:41.075 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:41.076021 | 
orchestrator | 16:52:41.075 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:41.076064 | orchestrator | 16:52:41.076 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.076091 | orchestrator | 16:52:41.076 STDOUT terraform:  + port_range_max = 22 2025-06-02 16:52:41.076121 | orchestrator | 16:52:41.076 STDOUT terraform:  + port_range_min = 22 2025-06-02 16:52:41.076146 | orchestrator | 16:52:41.076 STDOUT terraform:  + protocol = "tcp" 2025-06-02 16:52:41.076189 | orchestrator | 16:52:41.076 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.076229 | orchestrator | 16:52:41.076 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:41.076280 | orchestrator | 16:52:41.076 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 16:52:41.076317 | orchestrator | 16:52:41.076 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:41.076360 | orchestrator | 16:52:41.076 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.076379 | orchestrator | 16:52:41.076 STDOUT terraform:  } 2025-06-02 16:52:41.076451 | orchestrator | 16:52:41.076 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-06-02 16:52:41.076519 | orchestrator | 16:52:41.076 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-02 16:52:41.076554 | orchestrator | 16:52:41.076 STDOUT terraform:  + description = "wireguard" 2025-06-02 16:52:41.076585 | orchestrator | 16:52:41.076 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:41.076614 | orchestrator | 16:52:41.076 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:41.076655 | orchestrator | 16:52:41.076 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.076682 | orchestrator | 16:52:41.076 STDOUT terraform:  + port_range_max = 51820 2025-06-02 16:52:41.076712 | orchestrator | 16:52:41.076 STDOUT 
terraform:  + port_range_min = 51820 2025-06-02 16:52:41.076739 | orchestrator | 16:52:41.076 STDOUT terraform:  + protocol = "udp" 2025-06-02 16:52:41.076779 | orchestrator | 16:52:41.076 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.076818 | orchestrator | 16:52:41.076 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:41.076852 | orchestrator | 16:52:41.076 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 16:52:41.076894 | orchestrator | 16:52:41.076 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:41.076934 | orchestrator | 16:52:41.076 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.076945 | orchestrator | 16:52:41.076 STDOUT terraform:  } 2025-06-02 16:52:41.077016 | orchestrator | 16:52:41.076 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-02 16:52:41.077089 | orchestrator | 16:52:41.077 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-02 16:52:41.077121 | orchestrator | 16:52:41.077 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:41.077148 | orchestrator | 16:52:41.077 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:41.077194 | orchestrator | 16:52:41.077 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.077221 | orchestrator | 16:52:41.077 STDOUT terraform:  + protocol = "tcp" 2025-06-02 16:52:41.077289 | orchestrator | 16:52:41.077 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.077329 | orchestrator | 16:52:41.077 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:41.077371 | orchestrator | 16:52:41.077 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 16:52:41.077413 | orchestrator | 16:52:41.077 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:41.077455 | orchestrator | 
16:52:41.077 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.077466 | orchestrator | 16:52:41.077 STDOUT terraform:  } 2025-06-02 16:52:41.077541 | orchestrator | 16:52:41.077 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-02 16:52:41.077650 | orchestrator | 16:52:41.077 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-02 16:52:41.077684 | orchestrator | 16:52:41.077 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:41.077721 | orchestrator | 16:52:41.077 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:41.077788 | orchestrator | 16:52:41.077 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.077821 | orchestrator | 16:52:41.077 STDOUT terraform:  + protocol = "udp" 2025-06-02 16:52:41.077865 | orchestrator | 16:52:41.077 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.077904 | orchestrator | 16:52:41.077 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:41.077944 | orchestrator | 16:52:41.077 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-02 16:52:41.077988 | orchestrator | 16:52:41.077 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:41.078057 | orchestrator | 16:52:41.077 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.078080 | orchestrator | 16:52:41.078 STDOUT terraform:  } 2025-06-02 16:52:41.078156 | orchestrator | 16:52:41.078 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-02 16:52:41.078229 | orchestrator | 16:52:41.078 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-02 16:52:41.078334 | orchestrator | 16:52:41.078 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:41.078384 | orchestrator | 16:52:41.078 
STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:41.078430 | orchestrator | 16:52:41.078 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.078453 | orchestrator | 16:52:41.078 STDOUT terraform:  + protocol = "icmp" 2025-06-02 16:52:41.078565 | orchestrator | 16:52:41.078 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.078611 | orchestrator | 16:52:41.078 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:41.078649 | orchestrator | 16:52:41.078 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 16:52:41.078690 | orchestrator | 16:52:41.078 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:41.078731 | orchestrator | 16:52:41.078 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.078753 | orchestrator | 16:52:41.078 STDOUT terraform:  } 2025-06-02 16:52:41.078823 | orchestrator | 16:52:41.078 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-02 16:52:41.078896 | orchestrator | 16:52:41.078 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-02 16:52:41.078929 | orchestrator | 16:52:41.078 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:41.078958 | orchestrator | 16:52:41.078 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:41.079000 | orchestrator | 16:52:41.078 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.079038 | orchestrator | 16:52:41.078 STDOUT terraform:  + protocol = "tcp" 2025-06-02 16:52:41.079080 | orchestrator | 16:52:41.079 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.079126 | orchestrator | 16:52:41.079 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:41.079160 | orchestrator | 16:52:41.079 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 
2025-06-02 16:52:41.079202 | orchestrator | 16:52:41.079 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:41.079245 | orchestrator | 16:52:41.079 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.079298 | orchestrator | 16:52:41.079 STDOUT terraform:  } 2025-06-02 16:52:41.079368 | orchestrator | 16:52:41.079 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-02 16:52:41.079440 | orchestrator | 16:52:41.079 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-02 16:52:41.079474 | orchestrator | 16:52:41.079 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:41.079503 | orchestrator | 16:52:41.079 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:41.079547 | orchestrator | 16:52:41.079 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.079576 | orchestrator | 16:52:41.079 STDOUT terraform:  + protocol = "udp" 2025-06-02 16:52:41.079625 | orchestrator | 16:52:41.079 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.079663 | orchestrator | 16:52:41.079 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:41.079699 | orchestrator | 16:52:41.079 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 16:52:41.079742 | orchestrator | 16:52:41.079 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:41.079785 | orchestrator | 16:52:41.079 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.079795 | orchestrator | 16:52:41.079 STDOUT terraform:  } 2025-06-02 16:52:41.079868 | orchestrator | 16:52:41.079 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-02 16:52:41.079942 | orchestrator | 16:52:41.079 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-02 16:52:41.079988 | 
orchestrator | 16:52:41.079 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:41.079995 | orchestrator | 16:52:41.079 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:41.080066 | orchestrator | 16:52:41.079 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.080072 | orchestrator | 16:52:41.080 STDOUT terraform:  + protocol = "icmp" 2025-06-02 16:52:41.080111 | orchestrator | 16:52:41.080 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.080153 | orchestrator | 16:52:41.080 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-02 16:52:41.080184 | orchestrator | 16:52:41.080 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 16:52:41.080226 | orchestrator | 16:52:41.080 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:41.080283 | orchestrator | 16:52:41.080 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.080291 | orchestrator | 16:52:41.080 STDOUT terraform:  } 2025-06-02 16:52:41.080367 | orchestrator | 16:52:41.080 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-06-02 16:52:41.080438 | orchestrator | 16:52:41.080 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-06-02 16:52:41.080488 | orchestrator | 16:52:41.080 STDOUT terraform:  + description = "vrrp" 2025-06-02 16:52:41.080494 | orchestrator | 16:52:41.080 STDOUT terraform:  + direction = "ingress" 2025-06-02 16:52:41.080520 | orchestrator | 16:52:41.080 STDOUT terraform:  + ethertype = "IPv4" 2025-06-02 16:52:41.080565 | orchestrator | 16:52:41.080 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.080599 | orchestrator | 16:52:41.080 STDOUT terraform:  + protocol = "112" 2025-06-02 16:52:41.080636 | orchestrator | 16:52:41.080 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.080679 | orchestrator | 16:52:41.080 STDOUT terraform:  + 
remote_group_id = (known after apply) 2025-06-02 16:52:41.080713 | orchestrator | 16:52:41.080 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-02 16:52:41.080753 | orchestrator | 16:52:41.080 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-02 16:52:41.080795 | orchestrator | 16:52:41.080 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.080804 | orchestrator | 16:52:41.080 STDOUT terraform:  } 2025-06-02 16:52:41.080878 | orchestrator | 16:52:41.080 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-06-02 16:52:41.080944 | orchestrator | 16:52:41.080 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-06-02 16:52:41.080982 | orchestrator | 16:52:41.080 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:41.081028 | orchestrator | 16:52:41.080 STDOUT terraform:  + description = "management security group" 2025-06-02 16:52:41.081070 | orchestrator | 16:52:41.081 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.081109 | orchestrator | 16:52:41.081 STDOUT terraform:  + name = "testbed-management" 2025-06-02 16:52:41.081148 | orchestrator | 16:52:41.081 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.081188 | orchestrator | 16:52:41.081 STDOUT terraform:  + stateful = (known after apply) 2025-06-02 16:52:41.081227 | orchestrator | 16:52:41.081 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.081235 | orchestrator | 16:52:41.081 STDOUT terraform:  } 2025-06-02 16:52:41.081323 | orchestrator | 16:52:41.081 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-06-02 16:52:41.081391 | orchestrator | 16:52:41.081 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-06-02 16:52:41.081427 | orchestrator | 16:52:41.081 STDOUT terraform:  + all_tags = (known after 
apply) 2025-06-02 16:52:41.081469 | orchestrator | 16:52:41.081 STDOUT terraform:  + description = "node security group" 2025-06-02 16:52:41.081508 | orchestrator | 16:52:41.081 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.081544 | orchestrator | 16:52:41.081 STDOUT terraform:  + name = "testbed-node" 2025-06-02 16:52:41.081585 | orchestrator | 16:52:41.081 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.081624 | orchestrator | 16:52:41.081 STDOUT terraform:  + stateful = (known after apply) 2025-06-02 16:52:41.081665 | orchestrator | 16:52:41.081 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.081672 | orchestrator | 16:52:41.081 STDOUT terraform:  } 2025-06-02 16:52:41.081739 | orchestrator | 16:52:41.081 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-06-02 16:52:41.081801 | orchestrator | 16:52:41.081 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-06-02 16:52:41.081844 | orchestrator | 16:52:41.081 STDOUT terraform:  + all_tags = (known after apply) 2025-06-02 16:52:41.081887 | orchestrator | 16:52:41.081 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-06-02 16:52:41.081915 | orchestrator | 16:52:41.081 STDOUT terraform:  + dns_nameservers = [ 2025-06-02 16:52:41.081942 | orchestrator | 16:52:41.081 STDOUT terraform:  + "8.8.8.8", 2025-06-02 16:52:41.081966 | orchestrator | 16:52:41.081 STDOUT terraform:  + "9.9.9.9", 2025-06-02 16:52:41.081973 | orchestrator | 16:52:41.081 STDOUT terraform:  ] 2025-06-02 16:52:41.082010 | orchestrator | 16:52:41.081 STDOUT terraform:  + enable_dhcp = true 2025-06-02 16:52:41.082075 | orchestrator | 16:52:41.082 STDOUT terraform:  + gateway_ip = (known after apply) 2025-06-02 16:52:41.082119 | orchestrator | 16:52:41.082 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.082152 | orchestrator | 16:52:41.082 STDOUT terraform:  + ip_version = 4 2025-06-02 
16:52:41.082193 | orchestrator | 16:52:41.082 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-06-02 16:52:41.082237 | orchestrator | 16:52:41.082 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-06-02 16:52:41.082320 | orchestrator | 16:52:41.082 STDOUT terraform:  + name = "subnet-testbed-management" 2025-06-02 16:52:41.082378 | orchestrator | 16:52:41.082 STDOUT terraform:  + network_id = (known after apply) 2025-06-02 16:52:41.082409 | orchestrator | 16:52:41.082 STDOUT terraform:  + no_gateway = false 2025-06-02 16:52:41.082452 | orchestrator | 16:52:41.082 STDOUT terraform:  + region = (known after apply) 2025-06-02 16:52:41.082518 | orchestrator | 16:52:41.082 STDOUT terraform:  + service_types = (known after apply) 2025-06-02 16:52:41.082558 | orchestrator | 16:52:41.082 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-02 16:52:41.082586 | orchestrator | 16:52:41.082 STDOUT terraform:  + allocation_pool { 2025-06-02 16:52:41.082620 | orchestrator | 16:52:41.082 STDOUT terraform:  + end = "192.168.31.250" 2025-06-02 16:52:41.082653 | orchestrator | 16:52:41.082 STDOUT terraform:  + start = "192.168.31.200" 2025-06-02 16:52:41.082659 | orchestrator | 16:52:41.082 STDOUT terraform:  } 2025-06-02 16:52:41.082687 | orchestrator | 16:52:41.082 STDOUT terraform:  } 2025-06-02 16:52:41.082717 | orchestrator | 16:52:41.082 STDOUT terraform:  # terraform_data.image will be created 2025-06-02 16:52:41.082778 | orchestrator | 16:52:41.082 STDOUT terraform:  + resource "terraform_data" "image" { 2025-06-02 16:52:41.082786 | orchestrator | 16:52:41.082 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.082807 | orchestrator | 16:52:41.082 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-02 16:52:41.082840 | orchestrator | 16:52:41.082 STDOUT terraform:  + output = (known after apply) 2025-06-02 16:52:41.082848 | orchestrator | 16:52:41.082 STDOUT terraform:  } 2025-06-02 16:52:41.082891 | orchestrator | 
16:52:41.082 STDOUT terraform:  # terraform_data.image_node will be created 2025-06-02 16:52:41.082928 | orchestrator | 16:52:41.082 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-06-02 16:52:41.082961 | orchestrator | 16:52:41.082 STDOUT terraform:  + id = (known after apply) 2025-06-02 16:52:41.082990 | orchestrator | 16:52:41.082 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-02 16:52:41.083024 | orchestrator | 16:52:41.082 STDOUT terraform:  + output = (known after apply) 2025-06-02 16:52:41.083031 | orchestrator | 16:52:41.083 STDOUT terraform:  } 2025-06-02 16:52:41.083077 | orchestrator | 16:52:41.083 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-06-02 16:52:41.083085 | orchestrator | 16:52:41.083 STDOUT terraform: Changes to Outputs: 2025-06-02 16:52:41.083124 | orchestrator | 16:52:41.083 STDOUT terraform:  + manager_address = (sensitive value) 2025-06-02 16:52:41.083157 | orchestrator | 16:52:41.083 STDOUT terraform:  + private_key = (sensitive value) 2025-06-02 16:52:41.295278 | orchestrator | 16:52:41.295 STDOUT terraform: terraform_data.image: Creating... 2025-06-02 16:52:41.295401 | orchestrator | 16:52:41.295 STDOUT terraform: terraform_data.image_node: Creating... 2025-06-02 16:52:41.295419 | orchestrator | 16:52:41.295 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=4b680754-6bd8-2b91-8220-d238d80f1116] 2025-06-02 16:52:41.295476 | orchestrator | 16:52:41.295 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=b3297797-7ea4-0d8f-d845-354c685bef78] 2025-06-02 16:52:41.324453 | orchestrator | 16:52:41.324 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-06-02 16:52:41.324970 | orchestrator | 16:52:41.324 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-06-02 16:52:41.332313 | orchestrator | 16:52:41.332 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 
2025-06-02 16:52:41.332510 | orchestrator | 16:52:41.332 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-06-02 16:52:41.332646 | orchestrator | 16:52:41.332 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-06-02 16:52:41.333334 | orchestrator | 16:52:41.333 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-06-02 16:52:41.336821 | orchestrator | 16:52:41.333 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-06-02 16:52:41.336883 | orchestrator | 16:52:41.333 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-06-02 16:52:41.336890 | orchestrator | 16:52:41.334 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-06-02 16:52:41.342845 | orchestrator | 16:52:41.342 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-06-02 16:52:41.773690 | orchestrator | 16:52:41.773 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-02 16:52:41.778444 | orchestrator | 16:52:41.778 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-02 16:52:41.783479 | orchestrator | 16:52:41.783 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-06-02 16:52:41.785165 | orchestrator | 16:52:41.785 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-06-02 16:52:41.834172 | orchestrator | 16:52:41.833 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-06-02 16:52:41.842520 | orchestrator | 16:52:41.842 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 
2025-06-02 16:52:47.716972 | orchestrator | 16:52:47.716 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 7s [id=1fb60d1f-7650-478d-bb56-5e874c3b9874] 2025-06-02 16:52:47.728680 | orchestrator | 16:52:47.728 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-06-02 16:52:51.334302 | orchestrator | 16:52:51.333 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-06-02 16:52:51.334406 | orchestrator | 16:52:51.333 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-06-02 16:52:51.335001 | orchestrator | 16:52:51.334 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-06-02 16:52:51.336220 | orchestrator | 16:52:51.336 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-06-02 16:52:51.336292 | orchestrator | 16:52:51.336 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-06-02 16:52:51.343415 | orchestrator | 16:52:51.343 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-06-02 16:52:51.784631 | orchestrator | 16:52:51.784 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-06-02 16:52:51.786511 | orchestrator | 16:52:51.786 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-06-02 16:52:51.844103 | orchestrator | 16:52:51.843 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... 
[10s elapsed] 2025-06-02 16:52:51.907435 | orchestrator | 16:52:51.906 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=53941cc3-a8ff-45b3-9c82-286f81867ab6] 2025-06-02 16:52:51.919870 | orchestrator | 16:52:51.919 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=4a588e14-c726-4684-ac8a-ec1bcbcaf53d] 2025-06-02 16:52:51.921168 | orchestrator | 16:52:51.920 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-06-02 16:52:51.925771 | orchestrator | 16:52:51.925 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-06-02 16:52:51.926801 | orchestrator | 16:52:51.926 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=cab884bf-6138-4574-8f5c-e044606bea62] 2025-06-02 16:52:51.929034 | orchestrator | 16:52:51.928 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=c7f9d288-1a32-443d-a362-6ba679ef0f8f] 2025-06-02 16:52:51.932152 | orchestrator | 16:52:51.932 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-06-02 16:52:51.934854 | orchestrator | 16:52:51.934 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-06-02 16:52:51.935016 | orchestrator | 16:52:51.934 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3] 2025-06-02 16:52:51.940936 | orchestrator | 16:52:51.940 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-06-02 16:52:51.954966 | orchestrator | 16:52:51.953 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=f446ae25-d9a7-444f-b563-a9cba680652a] 2025-06-02 16:52:51.973674 | orchestrator | 16:52:51.973 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 
2025-06-02 16:52:51.980964 | orchestrator | 16:52:51.980 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=48682cc10d1c0e571daa11603c6c7ccc6d10e4ee] 2025-06-02 16:52:51.987627 | orchestrator | 16:52:51.987 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=42dd6fc7-77c1-48dd-afcf-d774f79f6bbd] 2025-06-02 16:52:51.989873 | orchestrator | 16:52:51.989 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-06-02 16:52:52.000295 | orchestrator | 16:52:52.000 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-06-02 16:52:52.004230 | orchestrator | 16:52:52.004 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=26f390cb9fd1f091f70d5c88960a752c71446229] 2025-06-02 16:52:52.004764 | orchestrator | 16:52:52.004 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=075a40bb-072b-46c1-930e-3c0277237be4] 2025-06-02 16:52:52.009147 | orchestrator | 16:52:52.009 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-06-02 16:52:52.032166 | orchestrator | 16:52:52.031 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=dd4bab9d-0787-4709-bf4e-89aace2da140] 2025-06-02 16:52:57.731747 | orchestrator | 16:52:57.731 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... 
[10s elapsed] 2025-06-02 16:52:58.045702 | orchestrator | 16:52:58.045 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=d48cd23a-d630-46fc-9aaf-c7330a98b261] 2025-06-02 16:52:58.119072 | orchestrator | 16:52:58.118 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=d356181b-74ae-4ac7-bc31-89a6ea21a1ee] 2025-06-02 16:52:58.126948 | orchestrator | 16:52:58.126 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-06-02 16:53:01.922796 | orchestrator | 16:53:01.922 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-06-02 16:53:01.927000 | orchestrator | 16:53:01.926 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-06-02 16:53:01.933173 | orchestrator | 16:53:01.932 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-06-02 16:53:01.936449 | orchestrator | 16:53:01.936 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-06-02 16:53:01.941856 | orchestrator | 16:53:01.941 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-06-02 16:53:01.991502 | orchestrator | 16:53:01.991 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... 
[10s elapsed] 2025-06-02 16:53:02.268023 | orchestrator | 16:53:02.267 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=99761c60-bcd6-43ee-98a0-4756239a5a12] 2025-06-02 16:53:02.298175 | orchestrator | 16:53:02.297 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=8697a44b-eed5-41d0-9c8d-10255323f65d] 2025-06-02 16:53:02.326459 | orchestrator | 16:53:02.326 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=e83e2705-4f98-41ae-acf9-bfb494f15fd6] 2025-06-02 16:53:02.332542 | orchestrator | 16:53:02.332 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=2efc9266-ddfc-4e29-8616-f47e0c5d606f] 2025-06-02 16:53:02.349692 | orchestrator | 16:53:02.349 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3] 2025-06-02 16:53:02.352953 | orchestrator | 16:53:02.352 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=60870759-8a8b-4186-93b0-9dd809266b84] 2025-06-02 16:53:06.089832 | orchestrator | 16:53:06.089 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=36d7715e-b632-427f-909f-bec31d48af76] 2025-06-02 16:53:06.100255 | orchestrator | 16:53:06.100 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-06-02 16:53:06.101713 | orchestrator | 16:53:06.101 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-06-02 16:53:06.101795 | orchestrator | 16:53:06.101 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 
2025-06-02 16:53:06.337440 | orchestrator | 16:53:06.336 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=9969a61a-c480-4f78-9ccb-f85199e64b94] 2025-06-02 16:53:06.348645 | orchestrator | 16:53:06.348 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-06-02 16:53:06.349197 | orchestrator | 16:53:06.349 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-06-02 16:53:06.350391 | orchestrator | 16:53:06.350 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-06-02 16:53:06.351070 | orchestrator | 16:53:06.350 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-06-02 16:53:06.362308 | orchestrator | 16:53:06.362 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-06-02 16:53:06.363122 | orchestrator | 16:53:06.363 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-06-02 16:53:06.363667 | orchestrator | 16:53:06.363 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-06-02 16:53:06.364703 | orchestrator | 16:53:06.364 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-06-02 16:53:06.405078 | orchestrator | 16:53:06.404 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=131dcb1d-bfb2-4512-bd2f-0fba0e07cfcb] 2025-06-02 16:53:06.420427 | orchestrator | 16:53:06.420 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 
2025-06-02 16:53:07.035362 | orchestrator | 16:53:07.034 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=8d87059b-f136-46ee-84b9-3102ad56a201] 2025-06-02 16:53:07.053290 | orchestrator | 16:53:07.053 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-06-02 16:53:07.180998 | orchestrator | 16:53:07.180 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=0fe746aa-25e9-4358-b052-5d3abc3ae069] 2025-06-02 16:53:07.188666 | orchestrator | 16:53:07.188 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-06-02 16:53:07.331692 | orchestrator | 16:53:07.331 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=216ef050-035b-4796-8099-f3d415ee7c59] 2025-06-02 16:53:07.339056 | orchestrator | 16:53:07.338 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-06-02 16:53:07.347291 | orchestrator | 16:53:07.346 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=e441b66a-9241-4a68-b3fb-05b777c12591] 2025-06-02 16:53:07.354611 | orchestrator | 16:53:07.354 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-06-02 16:53:07.482566 | orchestrator | 16:53:07.481 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=449efc38-ee89-4627-8c78-8772a350ccb7] 2025-06-02 16:53:07.490862 | orchestrator | 16:53:07.490 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 
2025-06-02 16:53:07.539105 | orchestrator | 16:53:07.538 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=193f391b-7413-4dd6-bda7-63ad1f388562] 2025-06-02 16:53:07.551315 | orchestrator | 16:53:07.550 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-06-02 16:53:07.675911 | orchestrator | 16:53:07.675 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=b5005c64-0be1-44b7-a076-744bf5c3d6a6] 2025-06-02 16:53:07.685183 | orchestrator | 16:53:07.684 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-06-02 16:53:07.811178 | orchestrator | 16:53:07.810 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=d62382f6-ccf3-4a37-80a9-70a6603349cf] 2025-06-02 16:53:07.957888 | orchestrator | 16:53:07.957 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=6901a368-7cea-4a58-b5cc-b8137cef99bb] 2025-06-02 16:53:11.980346 | orchestrator | 16:53:11.979 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=0cca1f76-0ae2-4eb8-b605-b312ad958570] 2025-06-02 16:53:11.982092 | orchestrator | 16:53:11.981 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=0d8fc326-6596-4617-a8e8-eebd7cb3377a] 2025-06-02 16:53:11.993029 | orchestrator | 16:53:11.992 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=d5cbd31d-7bcb-4d74-bc67-c1fba69aad74] 2025-06-02 16:53:12.025756 | orchestrator | 16:53:12.025 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=1834ff6c-b46c-4bd1-a7ec-2529f59aa6e4] 2025-06-02 16:53:12.125827 | orchestrator | 
16:53:12.125 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=98950d27-ac12-452d-ac45-27e5ba8859b8] 2025-06-02 16:53:12.650943 | orchestrator | 16:53:12.650 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=3be1942e-6dcd-4957-affe-a4ac869a5626] 2025-06-02 16:53:13.046756 | orchestrator | 16:53:13.046 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 5s [id=7601aca0-2d62-4125-a775-04beeeec6bd6] 2025-06-02 16:53:13.631514 | orchestrator | 16:53:13.631 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=d7d7fbf2-be1a-4881-9027-48a713a456cd] 2025-06-02 16:53:13.653592 | orchestrator | 16:53:13.653 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-06-02 16:53:13.662065 | orchestrator | 16:53:13.661 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-06-02 16:53:13.662257 | orchestrator | 16:53:13.662 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-06-02 16:53:13.673664 | orchestrator | 16:53:13.673 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-06-02 16:53:13.677802 | orchestrator | 16:53:13.677 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-06-02 16:53:13.678590 | orchestrator | 16:53:13.678 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-06-02 16:53:13.683288 | orchestrator | 16:53:13.683 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 
2025-06-02 16:53:20.506337 | orchestrator | 16:53:20.505 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=6627f27d-85a3-4985-ba59-2865192f9e5c] 2025-06-02 16:53:20.514673 | orchestrator | 16:53:20.514 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-06-02 16:53:20.521723 | orchestrator | 16:53:20.521 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-06-02 16:53:20.523112 | orchestrator | 16:53:20.523 STDOUT terraform: local_file.inventory: Creating... 2025-06-02 16:53:20.527740 | orchestrator | 16:53:20.527 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=4116023079b37148b0950b4ea78bc9bfbd3a63a9] 2025-06-02 16:53:20.528635 | orchestrator | 16:53:20.528 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=a12b137fa0fb73afde356f8a0c7f8c16c7d3cd80] 2025-06-02 16:53:21.275767 | orchestrator | 16:53:21.275 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=6627f27d-85a3-4985-ba59-2865192f9e5c] 2025-06-02 16:53:23.663549 | orchestrator | 16:53:23.663 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-06-02 16:53:23.663674 | orchestrator | 16:53:23.663 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-06-02 16:53:23.675755 | orchestrator | 16:53:23.675 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-06-02 16:53:23.679033 | orchestrator | 16:53:23.678 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-06-02 16:53:23.679188 | orchestrator | 16:53:23.678 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... 
[10s elapsed] 2025-06-02 16:53:23.684257 | orchestrator | 16:53:23.684 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-06-02 16:53:33.664639 | orchestrator | 16:53:33.664 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-06-02 16:53:33.664790 | orchestrator | 16:53:33.664 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-06-02 16:53:33.676059 | orchestrator | 16:53:33.675 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-06-02 16:53:33.680232 | orchestrator | 16:53:33.680 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-06-02 16:53:33.680448 | orchestrator | 16:53:33.680 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-06-02 16:53:33.684477 | orchestrator | 16:53:33.684 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-06-02 16:53:43.666340 | orchestrator | 16:53:43.665 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-06-02 16:53:43.666665 | orchestrator | 16:53:43.666 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-06-02 16:53:43.676792 | orchestrator | 16:53:43.676 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-06-02 16:53:43.680910 | orchestrator | 16:53:43.680 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-06-02 16:53:43.681089 | orchestrator | 16:53:43.680 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-06-02 16:53:43.685640 | orchestrator | 16:53:43.685 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[30s elapsed] 2025-06-02 16:53:44.156835 | orchestrator | 16:53:44.156 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=23be7dcf-da7c-446c-930f-c69545bc45be] 2025-06-02 16:53:44.210761 | orchestrator | 16:53:44.210 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=402bf727-3c7a-4edf-ba36-2a86a13820e9] 2025-06-02 16:53:53.670695 | orchestrator | 16:53:53.670 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2025-06-02 16:53:53.670827 | orchestrator | 16:53:53.670 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2025-06-02 16:53:53.677808 | orchestrator | 16:53:53.677 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2025-06-02 16:53:53.686548 | orchestrator | 16:53:53.686 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2025-06-02 16:53:54.259042 | orchestrator | 16:53:54.258 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 40s [id=b9b5727d-e0fc-412e-9bab-1f0597a0113f] 2025-06-02 16:53:54.308839 | orchestrator | 16:53:54.308 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 40s [id=7cf79e36-8561-429f-b91a-3a614bb76f78] 2025-06-02 16:53:54.343516 | orchestrator | 16:53:54.343 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 40s [id=f3095bee-151d-41b2-84cd-4a3343cb69b1] 2025-06-02 16:53:54.575682 | orchestrator | 16:53:54.575 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 41s [id=ee73d863-1cdd-4f93-957b-f32782010e74] 2025-06-02 16:53:54.604387 | orchestrator | 16:53:54.604 STDOUT terraform: null_resource.node_semaphore: Creating... 
2025-06-02 16:53:54.608103 | orchestrator | 16:53:54.607 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=4117656957157231027] 2025-06-02 16:53:54.612074 | orchestrator | 16:53:54.611 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-06-02 16:53:54.618515 | orchestrator | 16:53:54.618 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-06-02 16:53:54.621257 | orchestrator | 16:53:54.621 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-06-02 16:53:54.622766 | orchestrator | 16:53:54.622 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-06-02 16:53:54.626493 | orchestrator | 16:53:54.626 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-06-02 16:53:54.628499 | orchestrator | 16:53:54.628 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-06-02 16:53:54.634816 | orchestrator | 16:53:54.634 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-06-02 16:53:54.642995 | orchestrator | 16:53:54.642 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-06-02 16:53:54.644868 | orchestrator | 16:53:54.644 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-06-02 16:53:54.657857 | orchestrator | 16:53:54.657 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 
2025-06-02 16:53:59.937308 | orchestrator | 16:53:59.936 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 5s [id=402bf727-3c7a-4edf-ba36-2a86a13820e9/075a40bb-072b-46c1-930e-3c0277237be4] 2025-06-02 16:53:59.951494 | orchestrator | 16:53:59.950 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=ee73d863-1cdd-4f93-957b-f32782010e74/53941cc3-a8ff-45b3-9c82-286f81867ab6] 2025-06-02 16:53:59.960866 | orchestrator | 16:53:59.960 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=7cf79e36-8561-429f-b91a-3a614bb76f78/c7f9d288-1a32-443d-a362-6ba679ef0f8f] 2025-06-02 16:53:59.979152 | orchestrator | 16:53:59.978 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=402bf727-3c7a-4edf-ba36-2a86a13820e9/cab884bf-6138-4574-8f5c-e044606bea62] 2025-06-02 16:53:59.991510 | orchestrator | 16:53:59.991 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 5s [id=ee73d863-1cdd-4f93-957b-f32782010e74/42dd6fc7-77c1-48dd-afcf-d774f79f6bbd] 2025-06-02 16:54:00.001192 | orchestrator | 16:54:00.000 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=7cf79e36-8561-429f-b91a-3a614bb76f78/dd4bab9d-0787-4709-bf4e-89aace2da140] 2025-06-02 16:54:00.030356 | orchestrator | 16:54:00.029 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=402bf727-3c7a-4edf-ba36-2a86a13820e9/7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3] 2025-06-02 16:54:00.055315 | orchestrator | 16:54:00.054 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=7cf79e36-8561-429f-b91a-3a614bb76f78/f446ae25-d9a7-444f-b563-a9cba680652a] 2025-06-02 16:54:00.241180 | orchestrator | 
16:54:00.240 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=ee73d863-1cdd-4f93-957b-f32782010e74/4a588e14-c726-4684-ac8a-ec1bcbcaf53d] 2025-06-02 16:54:04.643800 | orchestrator | 16:54:04.643 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-02 16:54:14.645117 | orchestrator | 16:54:14.644 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-02 16:54:15.264897 | orchestrator | 16:54:15.264 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=c72054d0-03fa-4d04-a4d4-213d1a29c811] 2025-06-02 16:54:15.291640 | orchestrator | 16:54:15.291 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-06-02 16:54:15.291729 | orchestrator | 16:54:15.291 STDOUT terraform: Outputs: 2025-06-02 16:54:15.291750 | orchestrator | 16:54:15.291 STDOUT terraform: manager_address = 2025-06-02 16:54:15.291765 | orchestrator | 16:54:15.291 STDOUT terraform: private_key = 2025-06-02 16:54:15.388261 | orchestrator | ok: Runtime: 0:01:43.659276 2025-06-02 16:54:15.427372 | 2025-06-02 16:54:15.427664 | TASK [Create infrastructure (stable)] 2025-06-02 16:54:15.972508 | orchestrator | skipping: Conditional result was False 2025-06-02 16:54:15.981433 | 2025-06-02 16:54:15.981572 | TASK [Fetch manager address] 2025-06-02 16:54:16.455959 | orchestrator | ok 2025-06-02 16:54:16.465740 | 2025-06-02 16:54:16.465921 | TASK [Set manager_host address] 2025-06-02 16:54:16.547195 | orchestrator | ok 2025-06-02 16:54:16.557439 | 2025-06-02 16:54:16.557604 | LOOP [Update ansible collections] 2025-06-02 16:54:17.448139 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 16:54:17.448524 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-02 16:54:17.448591 | orchestrator | 
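The apply output above creates six `openstack_compute_instance_v2.node_server` instances plus indexed `openstack_compute_volume_attach_v2.node_volume_attachment` resources. A hypothetical HCL sketch of that shape (resource names taken from the log; counts, variables, and the volume-to-node mapping are placeholders, not taken from the job's actual configuration):

```hcl
# Sketch only: resource names match the apply output above,
# but flavor/image variables and the attachment indexing are assumptions.
resource "openstack_compute_instance_v2" "node_server" {
  count       = 6
  name        = "testbed-node-${count.index}"
  flavor_name = var.node_flavor # placeholder
  image_name  = var.node_image  # placeholder
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = var.node_volume_ids[count.index] # placeholder
}
```

Note that in the log each attachment completes against one of a few instance IDs, i.e. several volumes per node; the modulo indexing here is only illustrative.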
Starting galaxy collection install process 2025-06-02 16:54:17.448634 | orchestrator | Process install dependency map 2025-06-02 16:54:17.448671 | orchestrator | Starting collection install process 2025-06-02 16:54:17.449197 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons' 2025-06-02 16:54:17.450424 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons 2025-06-02 16:54:17.450548 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-02 16:54:17.450658 | orchestrator | ok: Item: commons Runtime: 0:00:00.558613 2025-06-02 16:54:18.346913 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 16:54:18.347125 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-02 16:54:18.347203 | orchestrator | Starting galaxy collection install process 2025-06-02 16:54:18.347263 | orchestrator | Process install dependency map 2025-06-02 16:54:18.347323 | orchestrator | Starting collection install process 2025-06-02 16:54:18.347382 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services' 2025-06-02 16:54:18.347436 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services 2025-06-02 16:54:18.347531 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-02 16:54:18.347612 | orchestrator | ok: Item: services Runtime: 0:00:00.600766 2025-06-02 16:54:18.368461 | 2025-06-02 16:54:18.368629 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-02 16:54:28.950478 | orchestrator | ok 2025-06-02 16:54:28.961245 | 2025-06-02 16:54:28.961368 | TASK [Wait a little longer for the manager so that 
everything is ready] 2025-06-02 16:55:29.012144 | orchestrator | ok 2025-06-02 16:55:29.022995 | 2025-06-02 16:55:29.023150 | TASK [Fetch manager ssh hostkey] 2025-06-02 16:55:30.604964 | orchestrator | Output suppressed because no_log was given 2025-06-02 16:55:30.619085 | 2025-06-02 16:55:30.619262 | TASK [Get ssh keypair from terraform environment] 2025-06-02 16:55:31.153356 | orchestrator | ok: Runtime: 0:00:00.010135 2025-06-02 16:55:31.170764 | 2025-06-02 16:55:31.171034 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-02 16:55:31.208017 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-02 16:55:31.219808 | 2025-06-02 16:55:31.219991 | TASK [Run manager part 0] 2025-06-02 16:55:32.499983 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 16:55:32.587105 | orchestrator | 2025-06-02 16:55:32.587230 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-02 16:55:32.587251 | orchestrator | 2025-06-02 16:55:32.587286 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-02 16:55:34.592861 | orchestrator | ok: [testbed-manager] 2025-06-02 16:55:34.593021 | orchestrator | 2025-06-02 16:55:34.593090 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-02 16:55:34.593117 | orchestrator | 2025-06-02 16:55:34.593139 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 16:55:36.680089 | orchestrator | ok: [testbed-manager] 2025-06-02 16:55:36.680162 | orchestrator | 2025-06-02 16:55:36.680170 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-02 16:55:37.424447 | 
orchestrator | ok: [testbed-manager] 2025-06-02 16:55:37.424587 | orchestrator | 2025-06-02 16:55:37.424597 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-02 16:55:37.464688 | orchestrator | skipping: [testbed-manager] 2025-06-02 16:55:37.464745 | orchestrator | 2025-06-02 16:55:37.464754 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-02 16:55:37.493046 | orchestrator | skipping: [testbed-manager] 2025-06-02 16:55:37.493132 | orchestrator | 2025-06-02 16:55:37.493146 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-02 16:55:37.529533 | orchestrator | skipping: [testbed-manager] 2025-06-02 16:55:37.529616 | orchestrator | 2025-06-02 16:55:37.529629 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-02 16:55:37.570585 | orchestrator | skipping: [testbed-manager] 2025-06-02 16:55:37.570640 | orchestrator | 2025-06-02 16:55:37.570648 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-02 16:55:37.613512 | orchestrator | skipping: [testbed-manager] 2025-06-02 16:55:37.613572 | orchestrator | 2025-06-02 16:55:37.613583 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-02 16:55:37.654432 | orchestrator | skipping: [testbed-manager] 2025-06-02 16:55:37.654606 | orchestrator | 2025-06-02 16:55:37.654647 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-02 16:55:37.696210 | orchestrator | skipping: [testbed-manager] 2025-06-02 16:55:37.696283 | orchestrator | 2025-06-02 16:55:37.696440 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-02 16:55:38.527066 | orchestrator | changed: [testbed-manager] 2025-06-02 16:55:38.527119 | 
orchestrator | 2025-06-02 16:55:38.527127 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-06-02 16:59:00.029235 | orchestrator | changed: [testbed-manager] 2025-06-02 16:59:00.029300 | orchestrator | 2025-06-02 16:59:00.029314 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-02 17:00:19.623815 | orchestrator | changed: [testbed-manager] 2025-06-02 17:00:19.623863 | orchestrator | 2025-06-02 17:00:19.623872 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-02 17:00:43.438794 | orchestrator | changed: [testbed-manager] 2025-06-02 17:00:43.438900 | orchestrator | 2025-06-02 17:00:43.438920 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-02 17:00:52.808808 | orchestrator | changed: [testbed-manager] 2025-06-02 17:00:52.808917 | orchestrator | 2025-06-02 17:00:52.808935 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-02 17:00:52.858640 | orchestrator | ok: [testbed-manager] 2025-06-02 17:00:52.858724 | orchestrator | 2025-06-02 17:00:52.858739 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-02 17:00:53.688795 | orchestrator | ok: [testbed-manager] 2025-06-02 17:00:53.688882 | orchestrator | 2025-06-02 17:00:53.688899 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-02 17:00:54.448339 | orchestrator | changed: [testbed-manager] 2025-06-02 17:00:54.448423 | orchestrator | 2025-06-02 17:00:54.448440 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-02 17:01:01.118149 | orchestrator | changed: [testbed-manager] 2025-06-02 17:01:01.118254 | orchestrator | 2025-06-02 17:01:01.118292 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-06-02 17:01:07.465030 | orchestrator | changed: [testbed-manager] 2025-06-02 17:01:07.465088 | orchestrator | 2025-06-02 17:01:07.465099 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-02 17:01:10.341752 | orchestrator | changed: [testbed-manager] 2025-06-02 17:01:10.341803 | orchestrator | 2025-06-02 17:01:10.341811 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-02 17:01:12.173881 | orchestrator | changed: [testbed-manager] 2025-06-02 17:01:12.173987 | orchestrator | 2025-06-02 17:01:12.174006 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-02 17:01:13.323777 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-02 17:01:13.323887 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-02 17:01:13.323901 | orchestrator | 2025-06-02 17:01:13.323912 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-02 17:01:13.369071 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-02 17:01:13.369193 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-02 17:01:13.369217 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-02 17:01:13.369237 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-02 17:01:19.338729 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-02 17:01:19.338787 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-02 17:01:19.338795 | orchestrator | 2025-06-02 17:01:19.338802 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-02 17:01:19.917443 | orchestrator | changed: [testbed-manager] 2025-06-02 17:01:19.917566 | orchestrator | 2025-06-02 17:01:19.917583 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-02 17:02:48.645878 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-02 17:02:48.645934 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-02 17:02:48.645945 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-02 17:02:48.645953 | orchestrator | 2025-06-02 17:02:48.645961 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-02 17:02:51.068088 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-06-02 17:02:51.068125 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-02 17:02:51.068130 | orchestrator | 2025-06-02 17:02:51.068135 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-02 17:02:51.068140 | orchestrator | 2025-06-02 17:02:51.068144 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 17:02:52.563700 | orchestrator | ok: [testbed-manager] 2025-06-02 17:02:52.563736 | orchestrator | 2025-06-02 17:02:52.563743 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-02 17:02:52.605228 | orchestrator | ok: [testbed-manager] 2025-06-02 17:02:52.605313 | 
orchestrator | 2025-06-02 17:02:52.605322 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-02 17:02:52.673982 | orchestrator | ok: [testbed-manager] 2025-06-02 17:02:52.674097 | orchestrator | 2025-06-02 17:02:52.674114 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-02 17:02:53.542132 | orchestrator | changed: [testbed-manager] 2025-06-02 17:02:53.542219 | orchestrator | 2025-06-02 17:02:53.542235 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-02 17:02:54.332358 | orchestrator | changed: [testbed-manager] 2025-06-02 17:02:54.332402 | orchestrator | 2025-06-02 17:02:54.332410 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-02 17:02:55.805650 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-02 17:02:55.805691 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-02 17:02:55.805698 | orchestrator | 2025-06-02 17:02:55.805714 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-02 17:02:57.262345 | orchestrator | changed: [testbed-manager] 2025-06-02 17:02:57.262469 | orchestrator | 2025-06-02 17:02:57.262488 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-02 17:02:59.062473 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-02 17:02:59.062515 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-02 17:02:59.062521 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-02 17:02:59.062527 | orchestrator | 2025-06-02 17:02:59.062533 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-02 17:02:59.625466 | orchestrator | changed: [testbed-manager] 
2025-06-02 17:02:59.625509 | orchestrator | 2025-06-02 17:02:59.625516 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-02 17:02:59.698764 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:02:59.698806 | orchestrator | 2025-06-02 17:02:59.698815 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-02 17:03:00.601350 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-02 17:03:00.601388 | orchestrator | changed: [testbed-manager] 2025-06-02 17:03:00.601395 | orchestrator | 2025-06-02 17:03:00.601402 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-02 17:03:00.636813 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:03:00.636845 | orchestrator | 2025-06-02 17:03:00.636852 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-02 17:03:00.669613 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:03:00.669648 | orchestrator | 2025-06-02 17:03:00.669655 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-02 17:03:00.697517 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:03:00.697548 | orchestrator | 2025-06-02 17:03:00.697553 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-02 17:03:00.739246 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:03:00.739338 | orchestrator | 2025-06-02 17:03:00.739345 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-02 17:03:01.501570 | orchestrator | ok: [testbed-manager] 2025-06-02 17:03:01.502180 | orchestrator | 2025-06-02 17:03:01.502195 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-02 17:03:01.502200 | orchestrator | 2025-06-02 
17:03:01.502206 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 17:03:02.910850 | orchestrator | ok: [testbed-manager] 2025-06-02 17:03:02.910884 | orchestrator | 2025-06-02 17:03:02.910890 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-02 17:03:03.866870 | orchestrator | changed: [testbed-manager] 2025-06-02 17:03:03.866908 | orchestrator | 2025-06-02 17:03:03.866915 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:03:03.866921 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-02 17:03:03.866926 | orchestrator | 2025-06-02 17:03:04.037312 | orchestrator | ok: Runtime: 0:07:32.440903 2025-06-02 17:03:04.053470 | 2025-06-02 17:03:04.053671 | TASK [Point out that logging in on the manager is now possible] 2025-06-02 17:03:04.092101 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-06-02 17:03:04.102006 | 2025-06-02 17:03:04.102137 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-02 17:03:04.150469 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of it here. It takes a few minutes for this task to complete. 
2025-06-02 17:03:04.159562 | 2025-06-02 17:03:04.159696 | TASK [Run manager part 1 + 2] 2025-06-02 17:03:05.159754 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-02 17:03:05.226159 | orchestrator | 2025-06-02 17:03:05.226459 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-02 17:03:05.226483 | orchestrator | 2025-06-02 17:03:05.226513 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 17:03:08.353004 | orchestrator | ok: [testbed-manager] 2025-06-02 17:03:08.353060 | orchestrator | 2025-06-02 17:03:08.353082 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-02 17:03:08.387765 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:03:08.387812 | orchestrator | 2025-06-02 17:03:08.387820 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-02 17:03:08.426679 | orchestrator | ok: [testbed-manager] 2025-06-02 17:03:08.426729 | orchestrator | 2025-06-02 17:03:08.426741 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-02 17:03:08.457953 | orchestrator | ok: [testbed-manager] 2025-06-02 17:03:08.457996 | orchestrator | 2025-06-02 17:03:08.458002 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-02 17:03:08.524087 | orchestrator | ok: [testbed-manager] 2025-06-02 17:03:08.524133 | orchestrator | 2025-06-02 17:03:08.524140 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-02 17:03:08.599752 | orchestrator | ok: [testbed-manager] 2025-06-02 17:03:08.599794 | orchestrator | 2025-06-02 17:03:08.599801 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-02 17:03:08.639296 | 
orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-02 17:03:08.639343 | orchestrator | 2025-06-02 17:03:08.639349 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-02 17:03:09.346813 | orchestrator | ok: [testbed-manager] 2025-06-02 17:03:09.346905 | orchestrator | 2025-06-02 17:03:09.346922 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-02 17:03:09.398224 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:03:09.398352 | orchestrator | 2025-06-02 17:03:09.398368 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-02 17:03:10.780083 | orchestrator | changed: [testbed-manager] 2025-06-02 17:03:10.780189 | orchestrator | 2025-06-02 17:03:10.780209 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-02 17:03:11.383190 | orchestrator | ok: [testbed-manager] 2025-06-02 17:03:11.383299 | orchestrator | 2025-06-02 17:03:11.383316 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-02 17:03:12.581372 | orchestrator | changed: [testbed-manager] 2025-06-02 17:03:12.581450 | orchestrator | 2025-06-02 17:03:12.581468 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-02 17:03:25.804935 | orchestrator | changed: [testbed-manager] 2025-06-02 17:03:25.804979 | orchestrator | 2025-06-02 17:03:25.804986 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-02 17:03:26.515831 | orchestrator | ok: [testbed-manager] 2025-06-02 17:03:26.515917 | orchestrator | 2025-06-02 17:03:26.515935 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-06-02 17:03:26.572922 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:03:26.572993 | orchestrator | 2025-06-02 17:03:26.573007 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-02 17:03:27.571190 | orchestrator | changed: [testbed-manager] 2025-06-02 17:03:27.571302 | orchestrator | 2025-06-02 17:03:27.571320 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-02 17:03:28.559143 | orchestrator | changed: [testbed-manager] 2025-06-02 17:03:28.559283 | orchestrator | 2025-06-02 17:03:28.559305 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-02 17:03:29.141058 | orchestrator | changed: [testbed-manager] 2025-06-02 17:03:29.141122 | orchestrator | 2025-06-02 17:03:29.141131 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-02 17:03:29.185278 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-02 17:03:29.185379 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-02 17:03:29.185395 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-02 17:03:29.185407 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-02 17:03:31.205243 | orchestrator | changed: [testbed-manager] 2025-06-02 17:03:31.205360 | orchestrator | 2025-06-02 17:03:31.205379 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-02 17:03:40.569710 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-02 17:03:40.569763 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-02 17:03:40.569773 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-02 17:03:40.569781 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-02 17:03:40.569792 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-02 17:03:40.569799 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-02 17:03:40.569805 | orchestrator | 2025-06-02 17:03:40.569812 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-02 17:03:41.652776 | orchestrator | changed: [testbed-manager] 2025-06-02 17:03:41.652858 | orchestrator | 2025-06-02 17:03:41.652873 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-02 17:03:41.697031 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:03:41.697097 | orchestrator | 2025-06-02 17:03:41.697107 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-02 17:03:45.023237 | orchestrator | changed: [testbed-manager] 2025-06-02 17:03:45.023368 | orchestrator | 2025-06-02 17:03:45.023385 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-02 17:03:45.066198 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:03:45.066316 | orchestrator | 2025-06-02 17:03:45.066333 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-02 17:05:25.326498 | orchestrator | changed: [testbed-manager] 2025-06-02 
17:05:25.326605 | orchestrator |
2025-06-02 17:05:25.326637 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 17:05:26.539810 | orchestrator | ok: [testbed-manager]
2025-06-02 17:05:26.539850 | orchestrator |
2025-06-02 17:05:26.539857 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:05:26.539864 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
2025-06-02 17:05:26.539870 | orchestrator |
2025-06-02 17:05:26.785018 | orchestrator | ok: Runtime: 0:02:22.188755
2025-06-02 17:05:26.802212 |
2025-06-02 17:05:26.802362 | TASK [Reboot manager]
2025-06-02 17:05:28.337437 | orchestrator | ok: Runtime: 0:00:00.963764
2025-06-02 17:05:28.346609 |
2025-06-02 17:05:28.346801 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-06-02 17:05:44.678351 | orchestrator | ok
2025-06-02 17:05:44.691066 |
2025-06-02 17:05:44.691361 | TASK [Wait a little longer for the manager so that everything is ready]
2025-06-02 17:06:44.734282 | orchestrator | ok
2025-06-02 17:06:44.741456 |
2025-06-02 17:06:44.741571 | TASK [Deploy manager + bootstrap nodes]
2025-06-02 17:06:47.530092 | orchestrator |
2025-06-02 17:06:47.530327 | orchestrator | # DEPLOY MANAGER
2025-06-02 17:06:47.530356 | orchestrator |
2025-06-02 17:06:47.530371 | orchestrator | + set -e
2025-06-02 17:06:47.530385 | orchestrator | + echo
2025-06-02 17:06:47.530400 | orchestrator | + echo '# DEPLOY MANAGER'
2025-06-02 17:06:47.530418 | orchestrator | + echo
2025-06-02 17:06:47.530468 | orchestrator | + cat /opt/manager-vars.sh
2025-06-02 17:06:47.534535 | orchestrator | export NUMBER_OF_NODES=6
2025-06-02 17:06:47.534563 | orchestrator |
2025-06-02 17:06:47.534576 | orchestrator | export CEPH_VERSION=reef
2025-06-02 17:06:47.534589 | orchestrator | export CONFIGURATION_VERSION=main
2025-06-02 17:06:47.534602 | orchestrator | export MANAGER_VERSION=latest
2025-06-02 17:06:47.534624 | orchestrator | export OPENSTACK_VERSION=2024.2
2025-06-02 17:06:47.534636 | orchestrator |
2025-06-02 17:06:47.534654 | orchestrator | export ARA=false
2025-06-02 17:06:47.534665 | orchestrator | export DEPLOY_MODE=manager
2025-06-02 17:06:47.534683 | orchestrator | export TEMPEST=false
2025-06-02 17:06:47.534695 | orchestrator | export IS_ZUUL=true
2025-06-02 17:06:47.534706 | orchestrator |
2025-06-02 17:06:47.534726 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65
2025-06-02 17:06:47.534738 | orchestrator | export EXTERNAL_API=false
2025-06-02 17:06:47.534749 | orchestrator |
2025-06-02 17:06:47.534760 | orchestrator | export IMAGE_USER=ubuntu
2025-06-02 17:06:47.534775 | orchestrator | export IMAGE_NODE_USER=ubuntu
2025-06-02 17:06:47.534786 | orchestrator |
2025-06-02 17:06:47.534797 | orchestrator | export CEPH_STACK=ceph-ansible
2025-06-02 17:06:47.534965 | orchestrator |
2025-06-02 17:06:47.534986 | orchestrator | + echo
2025-06-02 17:06:47.535004 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 17:06:47.536471 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 17:06:47.536490 | orchestrator | ++ INTERACTIVE=false
2025-06-02 17:06:47.536521 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 17:06:47.536535 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 17:06:47.536551 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 17:06:47.536597 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 17:06:47.536610 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 17:06:47.536621 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 17:06:47.536632 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 17:06:47.536643 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 17:06:47.536655 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 17:06:47.536921 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-02 17:06:47.537028 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-02 17:06:47.537056 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 17:06:47.537096 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 17:06:47.537115 | orchestrator | ++ export ARA=false
2025-06-02 17:06:47.537135 | orchestrator | ++ ARA=false
2025-06-02 17:06:47.537154 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 17:06:47.537170 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 17:06:47.537181 | orchestrator | ++ export TEMPEST=false
2025-06-02 17:06:47.537211 | orchestrator | ++ TEMPEST=false
2025-06-02 17:06:47.537223 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 17:06:47.537234 | orchestrator | ++ IS_ZUUL=true
2025-06-02 17:06:47.537287 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65
2025-06-02 17:06:47.537300 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65
2025-06-02 17:06:47.537312 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 17:06:47.537323 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 17:06:47.537333 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 17:06:47.537344 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 17:06:47.537355 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 17:06:47.537366 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 17:06:47.537377 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 17:06:47.537388 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 17:06:47.537400 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-06-02 17:06:47.603376 | orchestrator | + docker version
2025-06-02 17:06:47.908408 | orchestrator | Client: Docker Engine - Community
2025-06-02 17:06:47.908510 | orchestrator | Version: 27.5.1
2025-06-02 17:06:47.908527 | orchestrator | API version: 1.47
2025-06-02 17:06:47.908538 | orchestrator | Go version: go1.22.11
2025-06-02 17:06:47.908549 | orchestrator | Git commit: 9f9e405
2025-06-02 17:06:47.908560 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-02 17:06:47.908573 | orchestrator | OS/Arch: linux/amd64
2025-06-02 17:06:47.908584 | orchestrator | Context: default
2025-06-02 17:06:47.908595 | orchestrator |
2025-06-02 17:06:47.908607 | orchestrator | Server: Docker Engine - Community
2025-06-02 17:06:47.908619 | orchestrator | Engine:
2025-06-02 17:06:47.908631 | orchestrator | Version: 27.5.1
2025-06-02 17:06:47.908642 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-06-02 17:06:47.908684 | orchestrator | Go version: go1.22.11
2025-06-02 17:06:47.908696 | orchestrator | Git commit: 4c9b3b0
2025-06-02 17:06:47.908708 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-06-02 17:06:47.908719 | orchestrator | OS/Arch: linux/amd64
2025-06-02 17:06:47.908730 | orchestrator | Experimental: false
2025-06-02 17:06:47.908741 | orchestrator | containerd:
2025-06-02 17:06:47.908752 | orchestrator | Version: 1.7.27
2025-06-02 17:06:47.908779 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-06-02 17:06:47.908791 | orchestrator | runc:
2025-06-02 17:06:47.908803 | orchestrator | Version: 1.2.5
2025-06-02 17:06:47.908814 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-06-02 17:06:47.908825 | orchestrator | docker-init:
2025-06-02 17:06:47.908836 | orchestrator | Version: 0.19.0
2025-06-02 17:06:47.908848 | orchestrator | GitCommit: de40ad0
2025-06-02 17:06:47.913160 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-06-02 17:06:47.922709 | orchestrator | + set -e
2025-06-02 17:06:47.922755 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 17:06:47.922776 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 17:06:47.922795 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 17:06:47.922814 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 17:06:47.922832 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 17:06:47.922851 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 17:06:47.922864 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 17:06:47.922875 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-02 17:06:47.922886 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-02 17:06:47.922897 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 17:06:47.922907 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 17:06:47.922918 | orchestrator | ++ export ARA=false
2025-06-02 17:06:47.922929 | orchestrator | ++ ARA=false
2025-06-02 17:06:47.922940 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 17:06:47.922951 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 17:06:47.922962 | orchestrator | ++ export TEMPEST=false
2025-06-02 17:06:47.922972 | orchestrator | ++ TEMPEST=false
2025-06-02 17:06:47.922983 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 17:06:47.922994 | orchestrator | ++ IS_ZUUL=true
2025-06-02 17:06:47.923005 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65
2025-06-02 17:06:47.923016 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65
2025-06-02 17:06:47.923026 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 17:06:47.923037 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 17:06:47.923047 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 17:06:47.923058 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 17:06:47.923069 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 17:06:47.923080 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 17:06:47.923105 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 17:06:47.923117 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 17:06:47.923139 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 17:06:47.923150 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 17:06:47.923161 | orchestrator | ++ INTERACTIVE=false
2025-06-02 17:06:47.923172 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 17:06:47.923195 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 17:06:47.923217 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-02 17:06:47.923228 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-06-02 17:06:47.923239 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-06-02 17:06:47.931096 | orchestrator | + set -e
2025-06-02 17:06:47.931136 | orchestrator | + VERSION=reef
2025-06-02 17:06:47.932594 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-06-02 17:06:47.938787 | orchestrator | + [[ -n ceph_version: reef ]]
2025-06-02 17:06:47.938813 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-06-02 17:06:47.945103 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-06-02 17:06:47.951523 | orchestrator | + set -e
2025-06-02 17:06:47.951988 | orchestrator | + VERSION=2024.2
2025-06-02 17:06:47.952542 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-06-02 17:06:47.957012 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-06-02 17:06:47.957040 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-06-02 17:06:47.962787 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-06-02 17:06:47.963548 | orchestrator | ++ semver latest 7.0.0
2025-06-02 17:06:48.028638 | orchestrator | + [[ -1 -ge 0 ]]
2025-06-02 17:06:48.028739 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-06-02 17:06:48.028754 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-06-02 17:06:48.028768 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-06-02 17:06:48.073757 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 17:06:48.076663 | orchestrator | + source /opt/venv/bin/activate
2025-06-02 17:06:48.077884 | orchestrator | ++
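The `set-ceph-version.sh` and `set-openstack-version.sh` steps traced above both follow the same grep-then-sed pattern: check that the key is already present in the configuration file, then rewrite its value in place. A minimal sketch of that pattern (the `set_version` wrapper and the scratch file are illustrative; the key names and the real target path, `/opt/configuration/environments/manager/configuration.yml`, come from the trace):

```shell
#!/usr/bin/env bash
# Sketch of the grep-then-sed version-pinning pattern seen in the trace.
# The set_version function is a hypothetical wrapper, not the actual script.
set -e

set_version() {
    local key="$1" version="$2" file="$3"
    # Only rewrite when the key already exists, mirroring the
    # `[[ -n <grep output> ]]` guard in the traced scripts.
    if [[ -n "$(grep "^${key}:" "$file")" ]]; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$file"
    fi
}

# Demonstrate against a scratch copy instead of the real configuration file.
cfg=$(mktemp)
printf 'ceph_version: quincy\nopenstack_version: 2024.1\n' > "$cfg"
set_version ceph_version reef "$cfg"
set_version openstack_version 2024.2 "$cfg"
cat "$cfg"
```

Because the guard only fires when the key is present, a file without the key is left untouched rather than having a new line appended.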
deactivate nondestructive
2025-06-02 17:06:48.077976 | orchestrator | ++ '[' -n '' ']'
2025-06-02 17:06:48.077990 | orchestrator | ++ '[' -n '' ']'
2025-06-02 17:06:48.078003 | orchestrator | ++ hash -r
2025-06-02 17:06:48.078066 | orchestrator | ++ '[' -n '' ']'
2025-06-02 17:06:48.078080 | orchestrator | ++ unset VIRTUAL_ENV
2025-06-02 17:06:48.078091 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-06-02 17:06:48.078102 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-06-02 17:06:48.078127 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-06-02 17:06:48.078141 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-06-02 17:06:48.078152 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-06-02 17:06:48.078164 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-06-02 17:06:48.078176 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 17:06:48.078235 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 17:06:48.078290 | orchestrator | ++ export PATH
2025-06-02 17:06:48.078303 | orchestrator | ++ '[' -n '' ']'
2025-06-02 17:06:48.078314 | orchestrator | ++ '[' -z '' ']'
2025-06-02 17:06:48.078325 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-06-02 17:06:48.078336 | orchestrator | ++ PS1='(venv) '
2025-06-02 17:06:48.078352 | orchestrator | ++ export PS1
2025-06-02 17:06:48.078364 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-06-02 17:06:48.078374 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-06-02 17:06:48.078484 | orchestrator | ++ hash -r
2025-06-02 17:06:48.078520 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-06-02 17:06:49.537769 | orchestrator |
2025-06-02 17:06:49.537877 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-06-02 17:06:49.537893 | orchestrator |
2025-06-02 17:06:49.537906 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 17:06:50.148681 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:50.148795 | orchestrator |
2025-06-02 17:06:50.148811 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-02 17:06:51.252193 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:51.252321 | orchestrator |
2025-06-02 17:06:51.252338 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-06-02 17:06:51.252352 | orchestrator |
2025-06-02 17:06:51.252365 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 17:06:53.889516 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:53.889632 | orchestrator |
2025-06-02 17:06:53.889650 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-06-02 17:06:53.951157 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:53.951294 | orchestrator |
2025-06-02 17:06:53.951315 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-06-02 17:06:54.442723 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:54.442831 | orchestrator |
2025-06-02 17:06:54.442849 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-06-02 17:06:54.472681 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:06:54.472760 | orchestrator |
2025-06-02 17:06:54.472775 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-06-02 17:06:54.815975 | orchestrator | changed: [testbed-manager]
2025-06-02 17:06:54.816083 | orchestrator |
2025-06-02 17:06:54.816115 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-06-02 17:06:54.879729 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:06:54.879820 | orchestrator |
2025-06-02 17:06:54.879834 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-06-02 17:06:55.194235 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:55.194393 | orchestrator |
2025-06-02 17:06:55.194415 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-06-02 17:06:55.294666 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:06:55.294745 | orchestrator |
2025-06-02 17:06:55.294761 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-06-02 17:06:55.294774 | orchestrator |
2025-06-02 17:06:55.294789 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 17:06:57.088861 | orchestrator | ok: [testbed-manager]
2025-06-02 17:06:57.088981 | orchestrator |
2025-06-02 17:06:57.088999 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-06-02 17:06:57.215595 | orchestrator | included: osism.services.traefik for testbed-manager
2025-06-02 17:06:57.215689 | orchestrator |
2025-06-02 17:06:57.215705 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-06-02 17:06:57.268479 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-06-02 17:06:57.268553 | orchestrator |
2025-06-02 17:06:57.268567 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-06-02 17:06:58.413581 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-06-02 17:06:58.413688 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-06-02 17:06:58.413704 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-06-02 17:06:58.413717 | orchestrator |
2025-06-02 17:06:58.413731 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-06-02 17:07:00.306979 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-06-02 17:07:00.307089 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-06-02 17:07:00.307108 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-06-02 17:07:00.307121 | orchestrator |
2025-06-02 17:07:00.307134 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-06-02 17:07:01.035603 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 17:07:01.035712 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:01.035729 | orchestrator |
2025-06-02 17:07:01.035743 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-06-02 17:07:01.729644 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 17:07:01.729734 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:01.729751 | orchestrator |
2025-06-02 17:07:01.729764 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-06-02 17:07:01.799627 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:07:01.799713 | orchestrator |
2025-06-02 17:07:01.799727 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-06-02 17:07:02.197457 | orchestrator | ok: [testbed-manager]
2025-06-02 17:07:02.197553 | orchestrator |
2025-06-02 17:07:02.197570 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-06-02 17:07:02.287432 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-06-02 17:07:02.287537 | orchestrator |
2025-06-02 17:07:02.287552 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-06-02 17:07:03.471687 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:03.471792 | orchestrator |
2025-06-02 17:07:03.471809 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-06-02 17:07:04.379293 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:04.379402 | orchestrator |
2025-06-02 17:07:04.379419 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-06-02 17:07:16.990316 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:16.990435 | orchestrator |
2025-06-02 17:07:16.990453 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-06-02 17:07:17.039842 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:07:17.039931 | orchestrator |
2025-06-02 17:07:17.039945 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-06-02 17:07:17.039957 | orchestrator |
2025-06-02 17:07:17.039968 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 17:07:19.023111 | orchestrator | ok: [testbed-manager]
2025-06-02 17:07:19.023221 | orchestrator |
2025-06-02 17:07:19.023327 | orchestrator | TASK [Apply manager role] ******************************************************
2025-06-02 17:07:19.123002 | orchestrator | included: osism.services.manager for testbed-manager
2025-06-02 17:07:19.123103 | orchestrator |
2025-06-02 17:07:19.123118 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-06-02 17:07:19.190848 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-06-02 17:07:19.190931 | orchestrator |
2025-06-02 17:07:19.190945 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-06-02 17:07:22.102322 | orchestrator | ok: [testbed-manager]
2025-06-02 17:07:22.102431 | orchestrator |
2025-06-02 17:07:22.102447 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-06-02 17:07:22.159487 | orchestrator | ok: [testbed-manager]
2025-06-02 17:07:22.159542 | orchestrator |
2025-06-02 17:07:22.159558 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-06-02 17:07:22.301556 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-06-02 17:07:22.301662 | orchestrator |
2025-06-02 17:07:22.301677 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-06-02 17:07:25.406500 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-06-02 17:07:25.406606 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-06-02 17:07:25.406620 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-06-02 17:07:25.406632 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-06-02 17:07:25.406644 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-06-02 17:07:25.406655 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-06-02 17:07:25.406666 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-06-02 17:07:25.406677 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-06-02 17:07:25.406688 | orchestrator |
2025-06-02 17:07:25.406701 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-06-02 17:07:26.087578 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:26.087666 | orchestrator |
2025-06-02 17:07:26.087681 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-06-02 17:07:26.773676 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:26.773778 | orchestrator |
2025-06-02 17:07:26.773794 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-06-02 17:07:26.841337 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-06-02 17:07:26.841426 | orchestrator |
2025-06-02 17:07:26.841440 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-06-02 17:07:28.121458 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-06-02 17:07:28.121567 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-06-02 17:07:28.121581 | orchestrator |
2025-06-02 17:07:28.121595 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-06-02 17:07:28.795235 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:28.795393 | orchestrator |
2025-06-02 17:07:28.795409 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-06-02 17:07:28.853431 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:07:28.853522 | orchestrator |
2025-06-02 17:07:28.853539 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-06-02 17:07:28.929040 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-06-02 17:07:28.929113 | orchestrator |
2025-06-02 17:07:28.929129 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-06-02 17:07:30.401512 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 17:07:30.401609 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 17:07:30.401623 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:30.401636 | orchestrator |
2025-06-02 17:07:30.401649 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-06-02 17:07:31.236322 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:31.236401 | orchestrator |
2025-06-02 17:07:31.236417 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-06-02 17:07:31.298216 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:07:31.298337 | orchestrator |
2025-06-02 17:07:31.298352 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-06-02 17:07:31.415420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-06-02 17:07:31.415525 | orchestrator |
2025-06-02 17:07:31.415541 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-06-02 17:07:31.973955 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:31.974181 | orchestrator |
2025-06-02 17:07:31.974203 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-06-02 17:07:32.446331 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:32.446450 | orchestrator |
2025-06-02 17:07:32.446469 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-06-02 17:07:33.776094 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-06-02 17:07:33.776190 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-06-02 17:07:33.776204 | orchestrator |
2025-06-02 17:07:33.776217 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-06-02 17:07:34.480214 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:34.480360 | orchestrator |
2025-06-02 17:07:34.480377 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-06-02 17:07:34.928954 | orchestrator | ok: [testbed-manager]
2025-06-02 17:07:34.929057 | orchestrator |
2025-06-02 17:07:34.929074 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-06-02 17:07:35.321717 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:35.321803 | orchestrator |
2025-06-02 17:07:35.321815 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-06-02 17:07:35.373372 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:07:35.373450 | orchestrator |
2025-06-02 17:07:35.373465 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-06-02 17:07:35.448541 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-06-02 17:07:35.448619 | orchestrator |
2025-06-02 17:07:35.448631 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-06-02 17:07:35.504521 | orchestrator | ok: [testbed-manager]
2025-06-02 17:07:35.504593 | orchestrator |
2025-06-02 17:07:35.504605 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-06-02 17:07:37.680292 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-06-02 17:07:37.680395 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-06-02 17:07:37.680411 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-06-02 17:07:37.680424 | orchestrator |
2025-06-02 17:07:37.680437 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-06-02 17:07:38.443771 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:38.443875 | orchestrator |
2025-06-02 17:07:38.443891 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-06-02 17:07:39.202769 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:39.202845 | orchestrator |
2025-06-02 17:07:39.202861 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-06-02 17:07:39.974193 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:39.974338 | orchestrator |
2025-06-02 17:07:39.974355 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-06-02 17:07:40.059423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-06-02 17:07:40.059517 | orchestrator |
2025-06-02 17:07:40.059532 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-06-02 17:07:40.104769 | orchestrator | ok: [testbed-manager]
2025-06-02 17:07:40.104832 | orchestrator |
2025-06-02 17:07:40.104845 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-06-02 17:07:40.835091 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-06-02 17:07:40.835160 | orchestrator |
2025-06-02 17:07:40.835175 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-06-02 17:07:40.928670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-06-02 17:07:40.928760 | orchestrator |
2025-06-02 17:07:40.928774 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-06-02 17:07:41.689978 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:41.690120 | orchestrator |
2025-06-02 17:07:41.690135 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-06-02 17:07:42.344185 | orchestrator | ok: [testbed-manager]
2025-06-02 17:07:42.344323 | orchestrator |
2025-06-02 17:07:42.344342 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-06-02 17:07:42.393993 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:07:42.394100 | orchestrator |
2025-06-02 17:07:42.394117 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-06-02 17:07:42.455155 | orchestrator | ok: [testbed-manager]
2025-06-02 17:07:42.455227 | orchestrator |
2025-06-02 17:07:42.455247 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-06-02 17:07:43.300487 | orchestrator | changed: [testbed-manager]
2025-06-02 17:07:43.300583 | orchestrator |
2025-06-02 17:07:43.300598 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-06-02 17:08:51.914439 | orchestrator | changed: [testbed-manager]
2025-06-02 17:08:51.914556 | orchestrator |
2025-06-02 17:08:51.914572 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-06-02 17:08:52.820396 | orchestrator | ok: [testbed-manager]
2025-06-02 17:08:52.820508 | orchestrator |
2025-06-02 17:08:52.820525 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-06-02 17:08:52.876931 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:08:52.877010 | orchestrator |
2025-06-02 17:08:52.877027 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-06-02 17:08:55.671770 | orchestrator | changed: [testbed-manager] 2025-06-02 17:08:55.671879 | orchestrator | 2025-06-02 17:08:55.671899 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-06-02 17:08:55.729814 | orchestrator | ok: [testbed-manager] 2025-06-02 17:08:55.729910 | orchestrator | 2025-06-02 17:08:55.729925 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-02 17:08:55.729938 | orchestrator | 2025-06-02 17:08:55.729950 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-02 17:08:55.786388 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:08:55.786494 | orchestrator | 2025-06-02 17:08:55.786518 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-02 17:09:55.845013 | orchestrator | Pausing for 60 seconds 2025-06-02 17:09:55.845135 | orchestrator | changed: [testbed-manager] 2025-06-02 17:09:55.845152 | orchestrator | 2025-06-02 17:09:55.845166 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-02 17:10:00.625316 | orchestrator | changed: [testbed-manager] 2025-06-02 17:10:00.625429 | orchestrator | 2025-06-02 17:10:00.625447 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-02 17:10:42.458631 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-02 17:10:42.458743 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-06-02 17:10:42.458761 | orchestrator | changed: [testbed-manager]
2025-06-02 17:10:42.458775 | orchestrator |
2025-06-02 17:10:42.458788 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-06-02 17:10:52.532076 | orchestrator | changed: [testbed-manager]
2025-06-02 17:10:52.532192 | orchestrator |
2025-06-02 17:10:52.532208 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-06-02 17:10:52.619439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-06-02 17:10:52.619585 | orchestrator |
2025-06-02 17:10:52.619602 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-06-02 17:10:52.619614 | orchestrator |
2025-06-02 17:10:52.619626 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-06-02 17:10:52.673718 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:10:52.673820 | orchestrator |
2025-06-02 17:10:52.673837 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:10:52.673851 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-06-02 17:10:52.673863 | orchestrator |
2025-06-02 17:10:52.787712 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-06-02 17:10:52.787789 | orchestrator | + deactivate
2025-06-02 17:10:52.787804 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-06-02 17:10:52.787817 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-06-02 17:10:52.787828 | orchestrator | + export PATH
2025-06-02 17:10:52.787839 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-06-02 17:10:52.787851 | orchestrator | + '[' -n '' ']'
2025-06-02 17:10:52.787862 | orchestrator | + hash -r
2025-06-02 17:10:52.787872 | orchestrator | + '[' -n '' ']'
2025-06-02 17:10:52.787883 | orchestrator | + unset VIRTUAL_ENV
2025-06-02 17:10:52.787894 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-06-02 17:10:52.787929 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-06-02 17:10:52.787941 | orchestrator | + unset -f deactivate
2025-06-02 17:10:52.787954 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-06-02 17:10:52.794805 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-06-02 17:10:52.794830 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-06-02 17:10:52.794842 | orchestrator | + local max_attempts=60
2025-06-02 17:10:52.794906 | orchestrator | + local name=ceph-ansible
2025-06-02 17:10:52.794921 | orchestrator | + local attempt_num=1
2025-06-02 17:10:52.795403 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-06-02 17:10:52.834822 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 17:10:52.834865 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-06-02 17:10:52.834877 | orchestrator | + local max_attempts=60
2025-06-02 17:10:52.834888 | orchestrator | + local name=kolla-ansible
2025-06-02 17:10:52.834899 | orchestrator | + local attempt_num=1
2025-06-02 17:10:52.835963 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-06-02 17:10:52.876896 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 17:10:52.876965 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-06-02 17:10:52.876988 | orchestrator | + local max_attempts=60
2025-06-02 17:10:52.877008 | orchestrator | + local name=osism-ansible
2025-06-02 17:10:52.877027 | orchestrator | + local attempt_num=1
2025-06-02 17:10:52.877442 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-06-02 17:10:52.911838 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-06-02 17:10:52.911886 | orchestrator | + [[ true == \t\r\u\e ]]
2025-06-02 17:10:52.911899 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-06-02 17:10:53.634841 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-06-02 17:10:53.863239 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-06-02 17:10:53.863362 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-06-02 17:10:53.863380 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-06-02 17:10:53.863393 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-06-02 17:10:53.863407 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-06-02 17:10:53.863452 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-06-02 17:10:53.863464 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-06-02 17:10:53.863475 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy)
2025-06-02 17:10:53.863486 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-06-02 17:10:53.863497 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-06-02 17:10:53.863508 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-06-02 17:10:53.863518 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-06-02 17:10:53.863529 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" watchdog About a minute ago Up About a minute (healthy)
2025-06-02 17:10:53.863539 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-06-02 17:10:53.863550 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-06-02 17:10:53.863561 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-06-02 17:10:53.872202 | orchestrator | ++ semver latest 7.0.0
2025-06-02 17:10:53.934623 | orchestrator | + [[ -1 -ge 0 ]]
2025-06-02 17:10:53.934692 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-06-02 17:10:53.934707 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-06-02 17:10:53.940273 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-06-02 17:10:55.743778 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:10:55.743854 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:10:55.743868 | orchestrator | Registering Redlock._release_script
2025-06-02 17:10:55.942163 | orchestrator | 2025-06-02 17:10:55 | INFO  | Task 2590b23e-0296-42d7-b667-0e6e857854d4 (resolvconf) was prepared for execution.
2025-06-02 17:10:55.942254 | orchestrator | 2025-06-02 17:10:55 | INFO  | It takes a moment until task 2590b23e-0296-42d7-b667-0e6e857854d4 (resolvconf) has been started and output is visible here.
2025-06-02 17:11:00.190561 | orchestrator |
2025-06-02 17:11:00.190670 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-06-02 17:11:00.191320 | orchestrator |
2025-06-02 17:11:00.192414 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 17:11:00.193220 | orchestrator | Monday 02 June 2025 17:11:00 +0000 (0:00:00.160) 0:00:00.160 ***********
2025-06-02 17:11:04.226799 | orchestrator | ok: [testbed-manager]
2025-06-02 17:11:04.226897 | orchestrator |
2025-06-02 17:11:04.227412 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-06-02 17:11:04.228166 | orchestrator | Monday 02 June 2025 17:11:04 +0000 (0:00:04.040) 0:00:04.200 ***********
2025-06-02 17:11:04.289740 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:11:04.290423 | orchestrator |
2025-06-02 17:11:04.291212 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-06-02 17:11:04.291615 | orchestrator | Monday 02 June 2025 17:11:04 +0000 (0:00:00.064) 0:00:04.264 ***********
2025-06-02 17:11:04.372610 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-06-02 17:11:04.373540 | orchestrator |
2025-06-02 17:11:04.375107 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-06-02 17:11:04.376129 | orchestrator | Monday 02 June 2025 17:11:04 +0000 (0:00:00.082) 0:00:04.346 ***********
2025-06-02 17:11:04.472973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-06-02 17:11:04.473977 | orchestrator |
2025-06-02 17:11:04.474925 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-06-02 17:11:04.475782 | orchestrator | Monday 02 June 2025 17:11:04 +0000 (0:00:00.098) 0:00:04.445 ***********
2025-06-02 17:11:05.612175 | orchestrator | ok: [testbed-manager]
2025-06-02 17:11:05.613196 | orchestrator |
2025-06-02 17:11:05.614229 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-06-02 17:11:05.614937 | orchestrator | Monday 02 June 2025 17:11:05 +0000 (0:00:01.140) 0:00:05.585 ***********
2025-06-02 17:11:05.679938 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:11:05.680601 | orchestrator |
2025-06-02 17:11:05.682062 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-06-02 17:11:05.682806 | orchestrator | Monday 02 June 2025 17:11:05 +0000 (0:00:00.067) 0:00:05.653 ***********
2025-06-02 17:11:06.181851 | orchestrator | ok: [testbed-manager]
2025-06-02 17:11:06.182780 | orchestrator |
2025-06-02 17:11:06.183450 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-06-02 17:11:06.184051 | orchestrator | Monday 02 June 2025 17:11:06 +0000 (0:00:00.501) 0:00:06.155 ***********
2025-06-02 17:11:06.260802 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:11:06.261043 | orchestrator |
2025-06-02 17:11:06.263035 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-06-02 17:11:06.263570 | orchestrator | Monday 02 June 2025 17:11:06 +0000 (0:00:00.079) 0:00:06.234 ***********
2025-06-02 17:11:06.879022 | orchestrator | changed: [testbed-manager]
2025-06-02 17:11:06.879780 | orchestrator |
2025-06-02 17:11:06.880705 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-06-02 17:11:06.881699 | orchestrator | Monday 02 June 2025 17:11:06 +0000 (0:00:00.616) 0:00:06.850 ***********
2025-06-02 17:11:08.001740 | orchestrator | changed: [testbed-manager]
2025-06-02 17:11:08.002792 | orchestrator |
2025-06-02 17:11:08.003872 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-06-02 17:11:08.004576 | orchestrator | Monday 02 June 2025 17:11:07 +0000 (0:00:01.123) 0:00:07.974 ***********
2025-06-02 17:11:09.020882 | orchestrator | ok: [testbed-manager]
2025-06-02 17:11:09.021484 | orchestrator |
2025-06-02 17:11:09.022979 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-06-02 17:11:09.023812 | orchestrator | Monday 02 June 2025 17:11:09 +0000 (0:00:01.018) 0:00:08.993 ***********
2025-06-02 17:11:09.115790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-06-02 17:11:09.116136 | orchestrator |
2025-06-02 17:11:09.117578 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-06-02 17:11:09.118842 | orchestrator | Monday 02 June 2025 17:11:09 +0000 (0:00:00.095) 0:00:09.089 ***********
2025-06-02 17:11:10.409272 | orchestrator | changed: [testbed-manager]
2025-06-02 17:11:10.410183 | orchestrator |
2025-06-02 17:11:10.412078 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:11:10.412162 | orchestrator | 2025-06-02 17:11:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
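The `set -x` trace earlier in the log only shows the expanded body of `wait_for_container_healthy` (the variable assignments and the `docker inspect` probe); the helper itself lives in the testbed's deployment scripts. A minimal reconstruction of such a helper, assuming a polling interval and error message that do not appear in the trace, could look like this:

```shell
# Sketch of a wait_for_container_healthy helper as suggested by the trace.
# Only the docker inspect call and the max_attempts/name/attempt_num
# variables are taken from the log; the 5-second sleep and the error
# message are assumptions.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # .State.Health.Status reports "starting", "healthy" or "unhealthy"
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the trace above all three containers (ceph-ansible, kolla-ansible, osism-ansible) report `healthy` on the first probe, so the loop body never runs.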
2025-06-02 17:11:10.412557 | orchestrator | 2025-06-02 17:11:10 | INFO  | Please wait and do not abort execution.
2025-06-02 17:11:10.413611 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-02 17:11:10.415415 | orchestrator |
2025-06-02 17:11:10.416178 | orchestrator |
2025-06-02 17:11:10.417587 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:11:10.418495 | orchestrator | Monday 02 June 2025 17:11:10 +0000 (0:00:01.293) 0:00:10.382 ***********
2025-06-02 17:11:10.419387 | orchestrator | ===============================================================================
2025-06-02 17:11:10.420358 | orchestrator | Gathering Facts --------------------------------------------------------- 4.04s
2025-06-02 17:11:10.420632 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.29s
2025-06-02 17:11:10.421628 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.14s
2025-06-02 17:11:10.422263 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.12s
2025-06-02 17:11:10.423255 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.02s
2025-06-02 17:11:10.423746 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.62s
2025-06-02 17:11:10.424714 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s
2025-06-02 17:11:10.424992 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.10s
2025-06-02 17:11:10.425452 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s
2025-06-02 17:11:10.425956 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-06-02 17:11:10.426466 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s
2025-06-02 17:11:10.427307 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s
2025-06-02 17:11:10.427645 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s
2025-06-02 17:11:10.915455 | orchestrator | + osism apply sshconfig
2025-06-02 17:11:12.687119 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:11:12.687224 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:11:12.687239 | orchestrator | Registering Redlock._release_script
2025-06-02 17:11:12.768385 | orchestrator | 2025-06-02 17:11:12 | INFO  | Task 7598f96f-961a-4fbe-b86b-f58b0e4520ae (sshconfig) was prepared for execution.
2025-06-02 17:11:12.768477 | orchestrator | 2025-06-02 17:11:12 | INFO  | It takes a moment until task 7598f96f-961a-4fbe-b86b-f58b0e4520ae (sshconfig) has been started and output is visible here.
2025-06-02 17:11:16.850834 | orchestrator |
2025-06-02 17:11:16.851401 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-06-02 17:11:16.853235 | orchestrator |
2025-06-02 17:11:16.853258 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-06-02 17:11:16.853671 | orchestrator | Monday 02 June 2025 17:11:16 +0000 (0:00:00.166) 0:00:00.166 ***********
2025-06-02 17:11:17.446356 | orchestrator | ok: [testbed-manager]
2025-06-02 17:11:17.446945 | orchestrator |
2025-06-02 17:11:17.448209 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-06-02 17:11:17.448949 | orchestrator | Monday 02 June 2025 17:11:17 +0000 (0:00:00.597) 0:00:00.764 ***********
2025-06-02 17:11:17.967527 | orchestrator | changed: [testbed-manager]
2025-06-02 17:11:17.969272 | orchestrator |
2025-06-02 17:11:17.970675 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-06-02 17:11:17.971570 | orchestrator | Monday 02 June 2025 17:11:17 +0000 (0:00:00.518) 0:00:01.283 ***********
2025-06-02 17:11:24.041050 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-06-02 17:11:24.041198 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-06-02 17:11:24.041355 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-06-02 17:11:24.041537 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-06-02 17:11:24.041606 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-06-02 17:11:24.042737 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-06-02 17:11:24.043330 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-06-02 17:11:24.043566 | orchestrator |
2025-06-02 17:11:24.046173 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-06-02 17:11:24.048507 | orchestrator | Monday 02 June 2025 17:11:24 +0000 (0:00:06.074) 0:00:07.357 ***********
2025-06-02 17:11:24.101354 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:11:24.101814 | orchestrator |
2025-06-02 17:11:24.102674 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-06-02 17:11:24.103492 | orchestrator | Monday 02 June 2025 17:11:24 +0000 (0:00:00.061) 0:00:07.419 ***********
2025-06-02 17:11:24.732740 | orchestrator | changed: [testbed-manager]
2025-06-02 17:11:24.732881 | orchestrator |
2025-06-02 17:11:24.732955 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:11:24.733363 | orchestrator | 2025-06-02 17:11:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:11:24.733773 | orchestrator | 2025-06-02 17:11:24 | INFO  | Please wait and do not abort execution.
2025-06-02 17:11:24.734622 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-06-02 17:11:24.737216 | orchestrator |
2025-06-02 17:11:24.737262 | orchestrator |
2025-06-02 17:11:24.737276 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:11:24.737310 | orchestrator | Monday 02 June 2025 17:11:24 +0000 (0:00:00.632) 0:00:08.051 ***********
2025-06-02 17:11:24.737322 | orchestrator | ===============================================================================
2025-06-02 17:11:24.737333 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.07s
2025-06-02 17:11:24.737344 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.63s
2025-06-02 17:11:24.737355 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.60s
2025-06-02 17:11:24.737832 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.52s
2025-06-02 17:11:24.738100 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s
2025-06-02 17:11:25.273036 | orchestrator | + osism apply known-hosts
2025-06-02 17:11:26.783859 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:11:26.783959 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:11:26.783974 | orchestrator | Registering Redlock._release_script
2025-06-02 17:11:26.857602 | orchestrator | 2025-06-02 17:11:26 | INFO  | Task 3d6704ee-364b-4d43-89e6-81bdda46da90 (known-hosts) was prepared for execution.
2025-06-02 17:11:26.857693 | orchestrator | 2025-06-02 17:11:26 | INFO  | It takes a moment until task 3d6704ee-364b-4d43-89e6-81bdda46da90 (known-hosts) has been started and output is visible here.
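The known-hosts play that follows boils down to two steps per host: run `ssh-keyscan`, then write the collected entries into the operator's `known_hosts`. A hand-rolled sketch of that idea, with the hostnames taken from the log but the timeout and output handling assumed (the real logic is implemented task by task in `osism.commons.known_hosts`):

```shell
# Collect the host keys of every testbed node -- a simplified stand-in
# for what the osism.commons.known_hosts role does below. ssh-keyscan
# prints one "host keytype base64key" line per offered key type.
scan_testbed_hosts() {
    local host
    for host in testbed-manager testbed-node-0 testbed-node-1 \
                testbed-node-2 testbed-node-3 testbed-node-4 testbed-node-5; do
        # -T 5 bounds the per-host connection timeout (assumed value)
        ssh-keyscan -T 5 "$host" 2>/dev/null
    done
}

# Usage: append the scanned entries to the operator's known_hosts, e.g.
#   scan_testbed_hosts >> ~/.ssh/known_hosts
```

As the play output shows, each node typically offers three key types (ssh-rsa, ecdsa-sha2-nistp256, ssh-ed25519), so three entries are written per host.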
2025-06-02 17:11:31.023161 | orchestrator |
2025-06-02 17:11:31.023625 | orchestrator | PLAY [Apply role known_hosts] **************************************************
2025-06-02 17:11:31.024318 | orchestrator |
2025-06-02 17:11:31.025865 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] ***
2025-06-02 17:11:31.026834 | orchestrator | Monday 02 June 2025 17:11:31 +0000 (0:00:00.180) 0:00:00.180 ***********
2025-06-02 17:11:37.354170 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-02 17:11:37.356602 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-02 17:11:37.356634 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-02 17:11:37.357425 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-02 17:11:37.358468 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-02 17:11:37.359261 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-02 17:11:37.360087 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-02 17:11:37.360871 | orchestrator |
2025-06-02 17:11:37.361579 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] ***
2025-06-02 17:11:37.362269 | orchestrator | Monday 02 June 2025 17:11:37 +0000 (0:00:06.332) 0:00:06.513 ***********
2025-06-02 17:11:37.543403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-02 17:11:37.543966 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-02 17:11:37.546108 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-02 17:11:37.546143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-02 17:11:37.546154 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-02 17:11:37.546216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-02 17:11:37.546783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-02 17:11:37.547508 | orchestrator |
2025-06-02 17:11:37.548531 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 17:11:37.549050 | orchestrator | Monday 02 June 2025 17:11:37 +0000 (0:00:00.191) 0:00:06.704 ***********
2025-06-02 17:11:38.858227 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAUVh0DU0GtdpDKtwq8bvBPrzKinNWpis+VIKsjI4+ze)
2025-06-02 17:11:38.860063 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQ36XCIjRXToviP0KI8DaqQ8Mv5x4vie1C+ynBI6vtFQmOOuWTYUB4ivJqXq1ia5S2ZlBZcGow382BPtFy/4orZVWRcUUbY3jRVRoXXoeTl1jT0mmmwNwrqE1PkX5+IUhFIO6RmxfYIwPn3LpckGKM1EmuuF3f4NjhSPlBdB8uO4lrvt1iO9sBLSi64H1e2ETZr8pRK6fq5MC/g5LQzgTNLUaMsPIdiS4mya6zq6U5NGjMZZCjLdzWPCpHb/DCUrQOpfqsRgTdhdzO+vmNoz+PCCPR+tfMjhmbnWVMcGW15FN0fyMYstV25rMFs3IglS8jqEgqevatINKTVN4tii52u/qKLpV75rSOpvHeeVVVsoFXQEVGmKgtw3sTVnYwEQiIeRn8p6f8M8od8/5kRW1k6EVLfHUlfsZ1paSvY89AkS5BCkEBOSRy55vwC+rLe7veBiHhuP697IPZYGgabjVJ7qpZYVCffH6xkPSi6LbONLwV3dF5LXkJjv2XRqaePy0=)
2025-06-02 17:11:38.860537 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFVfW4605TsLf/IK6YkQOIPfZi0XgkPOatXWcd2ECqXq4kiqAMN/9OJ+l+geUbP5ynyfN9TwNtxjgtytg6Yj050=)
2025-06-02 17:11:38.861077 | orchestrator |
2025-06-02 17:11:38.861845 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 17:11:38.863065 | orchestrator | Monday 02 June 2025 17:11:38 +0000 (0:00:01.313) 0:00:08.017 ***********
2025-06-02 17:11:40.010683 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCzjk3P6SRP6T5A9hBf1bfXwO+GMXPvw7u/9ny83Ps+ep8XKsgSPUKOEwrczQ430r3rDSCkcsr6IjxhrT/OIGPEe0leKzmO5Xk5guxjSUMSkaAMwEJdAj4/9zFsbQ15oAeNS5CwM8MQiRNCTEQMiMKE7WJokEv4VVVvv1PegEofE4UBY66PpoEwRRfTyjaMguT4x0pO1jJLtOzl1+AJQSW4qwz8W3Q5w/4KPcK4LT6n6VQ09rOuNjif+RHakMn68UlbMf6ylkLN9B1SacRvZGj4lLHNP/TmI+XB6GQZApRREnFW655V0pGIM3ztQFiWya4siTsOE8cndM4dMzMZMgZJECCYZ7ZS/mPJCalUgQvlDm6J4lqHKZT62W109vQH2mgIXxGFVbIYGmo25v5DRrm54Fnch46stuvXXipUaA4ajI3P86ZfW7YX/dkrzi4gKmQgWxATN9Xc7j8Gkz1zMB9UFLJLpJV+5Fc9iluRUwFdMHMu9MnYooINvSbiVInun5M=)
2025-06-02 17:11:40.012904 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGXoKALoPFDwIDRC4KdplqN5HZSnKfgdUahN2gU5ylXF2uLYGarlmyJDeqaCGHu0341BJxonGsSdhV2sB+qwZmI=)
2025-06-02 17:11:40.014836 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF9+mA3J+5b1DD7Vcg+sKKIDL4Q0byST+aM4pt5oNwNg)
2025-06-02 17:11:40.016250 | orchestrator |
2025-06-02 17:11:40.018173 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 17:11:40.018963 | orchestrator | Monday 02 June 2025 17:11:40 +0000 (0:00:01.152) 0:00:09.170 ***********
2025-06-02 17:11:41.123617 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnVdkh5EOOdLsOOFQmzYA/7jof1zmL+LfqX/uzFrKg9pbM0f+RjN3Yyqmi4dxjmtVz5HY9hvhazYRmwnKPTTXOUWGB43cAeYaGbPDnkp4RvqJskeoz2aS30lsbUiI+pHPWCLMDJ6s9Yy8/8zjcI7gOUZdAJBG7h36MovinCFOLxvEDMW5w/JODV9BQvD940AbqmqaV3eSQ9mkMtlbjULeQaY6OaV5iv/0RYWetGZU8M1Kvl+cTFmtdNOahtPlSIx2rOvOcMG476f+rMSZhFsQqmIKTPt3GmREw8S9plAKan3EqPKMjwjvodFehH3AhNE9OK6q3yV0pXusoO9u31xfWJjlAWLfuotYwmsQDeqxad1ITOJVj4Jxzd0gdRClEhO8BKcgnVUS2pvyFlBgJMXMxO/NfDO/euF1WxxaY3X0dn65af3B0n2z9+Mm1gM+HZ+S3QuEiJ0SuEECDlB6Lbq54Oza3Kl9o4Sd/0l8Kn74jh/AwCERZYVn42gyd2r1NJ8k=)
2025-06-02 17:11:41.123817 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMEPlFCAmSXZ3zcFkfE26rAdjCCVd78eMN7AMPtlzvioq5ZI41eIWYyvS5Nsuir4jHQi2IsDLyyKjmafvon6Om0=)
2025-06-02 17:11:41.125162 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGo2pc6f68PiLbu80Qi+u8C8/ELK70e0rDzNF6e+PJCU)
2025-06-02 17:11:41.125189 | orchestrator |
2025-06-02 17:11:41.125562 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 17:11:41.126607 | orchestrator | Monday 02 June 2025 17:11:41 +0000 (0:00:01.112) 0:00:10.283 ***********
2025-06-02 17:11:42.267250 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUbuHiYM3Yt2vhF1M92SDTnGoyo7qRrIjEooDiM9eFcicrVZX29RPcdmOiRO7YBCB1IEltKJfFQrWBDDt9B1BtWdBNrmDTcDla8MzFJXeGdviGLMcq+NSJW34A578dJUZk5iHlq2KsD6nKp9KporMIESBP81K5tO6SSTKM/GAmFkHT2ozN9p5f+EnHWRH6+XlExnn00w7hbhCRoDq5ePq3g3Pd/SkDbAsk9dbCYYcQ3pkiq1MccZR1/WF1pCJD2d7zW3QakBWQzSiwX/z65IuOU3W+V3ziTKoR73iWH2l4ICpD2eeOa12kHBX50HKZS7xrF7MRx333HQqyAT6Lhb1Mz88SZzEqbF1vx9ylpz1wx26aJV+xnbR6LlXPXncYqXCzKujKJ5d47MLNOvXcZn3N1oWJWvqAGwvXGk8Wng7ofX7ZdYpLOTvTvzlUoUF1fasC8l6uZArgfXaxPgu+FgSzMAY6F+11A8ZotSF7s5/TGpjkImGuuRV/yoIUnc83GqM=)
2025-06-02 17:11:42.267552 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEv4//Fzk5qw6NbCCXCOkWZYLxQF6wDi7EB4NaLe5Xd1YW20zDxZoRgfTxXF8kLYXOKqpIBgW+e8xt2yrjNqzNg=)
2025-06-02 17:11:42.269173 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL8EcPHbdT5IBKjKTsPXtIGg7YVXa/ihYLd+dfXbsuXZ)
2025-06-02 17:11:42.270526 | orchestrator |
2025-06-02 17:11:42.272756 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 17:11:42.273775 | orchestrator | Monday 02 June 2025 17:11:42 +0000 (0:00:01.143) 0:00:11.426 ***********
2025-06-02 17:11:43.407286 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC045Zk5ar9RyNLeUuaUES3W/Y1xOJFasy9Lsmy1hJI/77vUIlpLH9Vz2uYCIZdrH+qu8fK8umIBSaI51C3ftUWFUsrJfNYqu8A1U0IZxja1tEt6daDFeHZGfRjM05o3r5KrZi8Qmc+H2n7CFwDzWqs/aYYTV12BFMvLE3aUBJ1WGNYNyoEhN8AO6jsmBLoXqS6YHIcUNUMHa/Qwu7kDM5GM+KJn2caoPydS/+Rzdr15oXA9TvJ/Wo+I0PX5e+7hBx4diVkV6GcmDwyZF+JrlUbXZGT31ghNmD7CnrPWEbP7d7r6nd+IaNYmGdH9QxO1wfA02wRNQ9oT45v89TGlKthtc+kwK+mn9Xbb75oHcZt+zGUTzStqMCHZ0aIDF31uxWh9Yd1HEDxqRLev1NmPK2xxmF8CDtR6Fy1FvQxvCkFlH2eb+iis1KLFHOeRtOsykxwTJiM/jf03u/NluifRay3vbeBC0VfnBj9eytf1Wpe98vlZTk7N7le1AkGKkLLMis=)
2025-06-02 17:11:43.407454 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF0ev1iFpagtPGIY0MbFTlHB7eowGFy3DG1NthkNt8Omed5IN6uZvGHQNaUe2B5hTciZ/2U68KBjcEJaTprVUbI=)
2025-06-02 17:11:43.408790 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMXYXNGpSLQiB4871BONRecitMtzXu9ZT67WZdNaXzck)
2025-06-02 17:11:43.410105 | orchestrator |
2025-06-02 17:11:43.410915 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 17:11:43.411709 | orchestrator | Monday 02 June 2025 17:11:43 +0000 (0:00:01.138) 0:00:12.565 ***********
2025-06-02 17:11:44.572932 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRRQjGnFwfZ/tzCZOjIb6nuDQJu5IjLtNolkWwBBEVrqwV/HWae58ev97bGRkzJXauFzSKZ0qYeG3pIrNFm7JG2kSsy5eYEbrmDRgWFJYfhttsBE9SctKtepixc5EUfUtAolbKslhcivbzVAZgMv7p51s+1/SsbYVbyhRQaovEqCGhRNmT97NmdoS/MLZdLetk9T8zPEgYvyjVZ8zKBCBHcDl6ncKxU00qYiQASZAEvrVGemd/DaTLygSXjQXrBpxT6ACy4DhpBVsatbSWvA3fp76GigTuQLrKgqf2hgfXY0KAZ7CvT1TT47aROd1iq5iCaRJO+UaiHLkwyBfAyX7sZc3w1rriddMMUMi4715pXHrPT+KzutVhML2ggn/6Pw/0YBSFeDRKDkV3RljijQcPiuHZ0R6CaXuvVY+sTAupB7vWf7wPKCwrWpk+0dPaEROoq3XIzNURJdteDxSGCeiSicjE/lwvDSh0dXMyiyezRtYOXIQBDgW9F5TT9oPlWCs=)
2025-06-02 17:11:44.573086 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFKX0fr9BP6e7BDnxVrHmhcRlZj2YjuYhpcJ8JUIKJuoJYAejjjtCkNn1CmT7fgonRdNNKLnfbWy5FM0ijTRhE8=)
2025-06-02 17:11:44.573115 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIAmNa/5q/DNNJmaeh6pAZs7B2yjA2vtDv0GG2qGW8T9)
2025-06-02 17:11:44.573253 | orchestrator |
2025-06-02 17:11:44.573902 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] ***********
2025-06-02 17:11:44.574643 | orchestrator | Monday 02 June 2025 17:11:44 +0000 (0:00:01.163) 0:00:13.728 ***********
2025-06-02 17:11:45.661696 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKy++b8F32i9m6K4Y1uUuBrJpaNhfOyWCznhIgJfmja6wSM54BpK8OQwGtksAvjHDaxgaMGhWtWRD4JbQqvLVJXZDtGQuGwJWQfD3uKV+SLq4bN1GYAD+vONpLVUBvLBGbPAvs7VHlQloXy9jWW9KNOQaif65iYeDsco3/oyjc9rnCgMSLx/uBEkbWW4nJsFjBOjhrVaOrm5aNqAILZUijr+NmcMurcXoE34AllVzHq5hrIjzUfpNAOuSkZ0lYTt//ZPEJ8Pv12FwsnkHjPOPVAtKsB/VFvkcy+Jqg2So+Bt7gbFI80PwxcBU/I1LNab5a5Db4jht2S+dkVjvImbZ8B3gJ01/YKxNYvjH7e6n1Gm4BJP07Be4sWtgmppw/ELLuB3y1HaIb+ckTvTfxRpuZAwf3Hr71BfbaqNIbcvIgPd34c6XvZFj2wlkwdrnvqdDa/1SyTmN5cvLnUUpK0PxToYHd8l7Nd4okr4VD9yhu1ZFG8th5Kwe6uA1UeMbuklk=)
2025-06-02 17:11:45.661804 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDEr6ye/XXMlWzDqtdu7GSSsKPnv5OGnrY4w5OBA5rKa1zNkqsYORsJr0c00Zzz09XYfynokKpq9vzmHKBvgDPI=)
2025-06-02 17:11:45.662546 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAQlbi/1bSHj/mEqSgw0VLWABrWpmSEMfO6d9B5hg4y6)
2025-06-02 17:11:45.663202 | orchestrator |
2025-06-02 17:11:45.664459 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] ***
2025-06-02 17:11:45.665089 | orchestrator | Monday 02 June 2025 17:11:45 +0000 (0:00:01.091) 0:00:14.820 ***********
2025-06-02 17:11:50.872839 | orchestrator | ok: [testbed-manager] => (item=testbed-manager)
2025-06-02 17:11:50.872955 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2025-06-02 17:11:50.872972 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2025-06-02 17:11:50.872985 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2025-06-02 17:11:50.873069 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3)
2025-06-02 17:11:50.874179 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4)
2025-06-02 17:11:50.875107 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5)
2025-06-02 17:11:50.876238 | orchestrator |
2025-06-02 17:11:50.876563 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] ***
2025-06-02 17:11:50.877950 | orchestrator | Monday 02 June 2025 17:11:50 +0000 (0:00:05.211) 0:00:20.031 ***********
2025-06-02 17:11:51.032350 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager)
2025-06-02 17:11:51.033038 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0)
2025-06-02 17:11:51.033281 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1)
2025-06-02 17:11:51.036373 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2)
2025-06-02 17:11:51.037444 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3)
2025-06-02 17:11:51.038342 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4)
2025-06-02 17:11:51.039027 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5)
2025-06-02 17:11:51.040003 | orchestrator |
2025-06-02 17:11:51.040761
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:51.041781 | orchestrator | Monday 02 June 2025 17:11:51 +0000 (0:00:00.160) 0:00:20.192 *********** 2025-06-02 17:11:52.053788 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDQ36XCIjRXToviP0KI8DaqQ8Mv5x4vie1C+ynBI6vtFQmOOuWTYUB4ivJqXq1ia5S2ZlBZcGow382BPtFy/4orZVWRcUUbY3jRVRoXXoeTl1jT0mmmwNwrqE1PkX5+IUhFIO6RmxfYIwPn3LpckGKM1EmuuF3f4NjhSPlBdB8uO4lrvt1iO9sBLSi64H1e2ETZr8pRK6fq5MC/g5LQzgTNLUaMsPIdiS4mya6zq6U5NGjMZZCjLdzWPCpHb/DCUrQOpfqsRgTdhdzO+vmNoz+PCCPR+tfMjhmbnWVMcGW15FN0fyMYstV25rMFs3IglS8jqEgqevatINKTVN4tii52u/qKLpV75rSOpvHeeVVVsoFXQEVGmKgtw3sTVnYwEQiIeRn8p6f8M8od8/5kRW1k6EVLfHUlfsZ1paSvY89AkS5BCkEBOSRy55vwC+rLe7veBiHhuP697IPZYGgabjVJ7qpZYVCffH6xkPSi6LbONLwV3dF5LXkJjv2XRqaePy0=) 2025-06-02 17:11:52.053900 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFVfW4605TsLf/IK6YkQOIPfZi0XgkPOatXWcd2ECqXq4kiqAMN/9OJ+l+geUbP5ynyfN9TwNtxjgtytg6Yj050=) 2025-06-02 17:11:52.054201 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAUVh0DU0GtdpDKtwq8bvBPrzKinNWpis+VIKsjI4+ze) 2025-06-02 17:11:52.054579 | orchestrator | 2025-06-02 17:11:52.055010 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:52.055451 | orchestrator | Monday 02 June 2025 17:11:52 +0000 (0:00:01.019) 0:00:21.212 *********** 2025-06-02 17:11:53.067245 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF9+mA3J+5b1DD7Vcg+sKKIDL4Q0byST+aM4pt5oNwNg) 2025-06-02 17:11:53.067702 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCzjk3P6SRP6T5A9hBf1bfXwO+GMXPvw7u/9ny83Ps+ep8XKsgSPUKOEwrczQ430r3rDSCkcsr6IjxhrT/OIGPEe0leKzmO5Xk5guxjSUMSkaAMwEJdAj4/9zFsbQ15oAeNS5CwM8MQiRNCTEQMiMKE7WJokEv4VVVvv1PegEofE4UBY66PpoEwRRfTyjaMguT4x0pO1jJLtOzl1+AJQSW4qwz8W3Q5w/4KPcK4LT6n6VQ09rOuNjif+RHakMn68UlbMf6ylkLN9B1SacRvZGj4lLHNP/TmI+XB6GQZApRREnFW655V0pGIM3ztQFiWya4siTsOE8cndM4dMzMZMgZJECCYZ7ZS/mPJCalUgQvlDm6J4lqHKZT62W109vQH2mgIXxGFVbIYGmo25v5DRrm54Fnch46stuvXXipUaA4ajI3P86ZfW7YX/dkrzi4gKmQgWxATN9Xc7j8Gkz1zMB9UFLJLpJV+5Fc9iluRUwFdMHMu9MnYooINvSbiVInun5M=) 2025-06-02 17:11:53.068738 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGXoKALoPFDwIDRC4KdplqN5HZSnKfgdUahN2gU5ylXF2uLYGarlmyJDeqaCGHu0341BJxonGsSdhV2sB+qwZmI=) 2025-06-02 17:11:53.071481 | orchestrator | 2025-06-02 17:11:53.072372 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:53.073213 | orchestrator | Monday 02 June 2025 17:11:53 +0000 (0:00:01.015) 0:00:22.227 *********** 2025-06-02 17:11:54.060136 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnVdkh5EOOdLsOOFQmzYA/7jof1zmL+LfqX/uzFrKg9pbM0f+RjN3Yyqmi4dxjmtVz5HY9hvhazYRmwnKPTTXOUWGB43cAeYaGbPDnkp4RvqJskeoz2aS30lsbUiI+pHPWCLMDJ6s9Yy8/8zjcI7gOUZdAJBG7h36MovinCFOLxvEDMW5w/JODV9BQvD940AbqmqaV3eSQ9mkMtlbjULeQaY6OaV5iv/0RYWetGZU8M1Kvl+cTFmtdNOahtPlSIx2rOvOcMG476f+rMSZhFsQqmIKTPt3GmREw8S9plAKan3EqPKMjwjvodFehH3AhNE9OK6q3yV0pXusoO9u31xfWJjlAWLfuotYwmsQDeqxad1ITOJVj4Jxzd0gdRClEhO8BKcgnVUS2pvyFlBgJMXMxO/NfDO/euF1WxxaY3X0dn65af3B0n2z9+Mm1gM+HZ+S3QuEiJ0SuEECDlB6Lbq54Oza3Kl9o4Sd/0l8Kn74jh/AwCERZYVn42gyd2r1NJ8k=) 2025-06-02 17:11:54.061207 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMEPlFCAmSXZ3zcFkfE26rAdjCCVd78eMN7AMPtlzvioq5ZI41eIWYyvS5Nsuir4jHQi2IsDLyyKjmafvon6Om0=) 
2025-06-02 17:11:54.062516 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGo2pc6f68PiLbu80Qi+u8C8/ELK70e0rDzNF6e+PJCU) 2025-06-02 17:11:54.063653 | orchestrator | 2025-06-02 17:11:54.064938 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:54.066162 | orchestrator | Monday 02 June 2025 17:11:54 +0000 (0:00:00.992) 0:00:23.220 *********** 2025-06-02 17:11:55.075245 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL8EcPHbdT5IBKjKTsPXtIGg7YVXa/ihYLd+dfXbsuXZ) 2025-06-02 17:11:55.076045 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUbuHiYM3Yt2vhF1M92SDTnGoyo7qRrIjEooDiM9eFcicrVZX29RPcdmOiRO7YBCB1IEltKJfFQrWBDDt9B1BtWdBNrmDTcDla8MzFJXeGdviGLMcq+NSJW34A578dJUZk5iHlq2KsD6nKp9KporMIESBP81K5tO6SSTKM/GAmFkHT2ozN9p5f+EnHWRH6+XlExnn00w7hbhCRoDq5ePq3g3Pd/SkDbAsk9dbCYYcQ3pkiq1MccZR1/WF1pCJD2d7zW3QakBWQzSiwX/z65IuOU3W+V3ziTKoR73iWH2l4ICpD2eeOa12kHBX50HKZS7xrF7MRx333HQqyAT6Lhb1Mz88SZzEqbF1vx9ylpz1wx26aJV+xnbR6LlXPXncYqXCzKujKJ5d47MLNOvXcZn3N1oWJWvqAGwvXGk8Wng7ofX7ZdYpLOTvTvzlUoUF1fasC8l6uZArgfXaxPgu+FgSzMAY6F+11A8ZotSF7s5/TGpjkImGuuRV/yoIUnc83GqM=) 2025-06-02 17:11:55.077232 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEv4//Fzk5qw6NbCCXCOkWZYLxQF6wDi7EB4NaLe5Xd1YW20zDxZoRgfTxXF8kLYXOKqpIBgW+e8xt2yrjNqzNg=) 2025-06-02 17:11:55.078135 | orchestrator | 2025-06-02 17:11:55.079037 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:55.079831 | orchestrator | Monday 02 June 2025 17:11:55 +0000 (0:00:01.014) 0:00:24.235 *********** 2025-06-02 17:11:56.101183 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBF0ev1iFpagtPGIY0MbFTlHB7eowGFy3DG1NthkNt8Omed5IN6uZvGHQNaUe2B5hTciZ/2U68KBjcEJaTprVUbI=) 2025-06-02 17:11:56.102102 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC045Zk5ar9RyNLeUuaUES3W/Y1xOJFasy9Lsmy1hJI/77vUIlpLH9Vz2uYCIZdrH+qu8fK8umIBSaI51C3ftUWFUsrJfNYqu8A1U0IZxja1tEt6daDFeHZGfRjM05o3r5KrZi8Qmc+H2n7CFwDzWqs/aYYTV12BFMvLE3aUBJ1WGNYNyoEhN8AO6jsmBLoXqS6YHIcUNUMHa/Qwu7kDM5GM+KJn2caoPydS/+Rzdr15oXA9TvJ/Wo+I0PX5e+7hBx4diVkV6GcmDwyZF+JrlUbXZGT31ghNmD7CnrPWEbP7d7r6nd+IaNYmGdH9QxO1wfA02wRNQ9oT45v89TGlKthtc+kwK+mn9Xbb75oHcZt+zGUTzStqMCHZ0aIDF31uxWh9Yd1HEDxqRLev1NmPK2xxmF8CDtR6Fy1FvQxvCkFlH2eb+iis1KLFHOeRtOsykxwTJiM/jf03u/NluifRay3vbeBC0VfnBj9eytf1Wpe98vlZTk7N7le1AkGKkLLMis=) 2025-06-02 17:11:56.103177 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMXYXNGpSLQiB4871BONRecitMtzXu9ZT67WZdNaXzck) 2025-06-02 17:11:56.104254 | orchestrator | 2025-06-02 17:11:56.104894 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:56.105279 | orchestrator | Monday 02 June 2025 17:11:56 +0000 (0:00:01.026) 0:00:25.261 *********** 2025-06-02 17:11:57.122282 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRRQjGnFwfZ/tzCZOjIb6nuDQJu5IjLtNolkWwBBEVrqwV/HWae58ev97bGRkzJXauFzSKZ0qYeG3pIrNFm7JG2kSsy5eYEbrmDRgWFJYfhttsBE9SctKtepixc5EUfUtAolbKslhcivbzVAZgMv7p51s+1/SsbYVbyhRQaovEqCGhRNmT97NmdoS/MLZdLetk9T8zPEgYvyjVZ8zKBCBHcDl6ncKxU00qYiQASZAEvrVGemd/DaTLygSXjQXrBpxT6ACy4DhpBVsatbSWvA3fp76GigTuQLrKgqf2hgfXY0KAZ7CvT1TT47aROd1iq5iCaRJO+UaiHLkwyBfAyX7sZc3w1rriddMMUMi4715pXHrPT+KzutVhML2ggn/6Pw/0YBSFeDRKDkV3RljijQcPiuHZ0R6CaXuvVY+sTAupB7vWf7wPKCwrWpk+0dPaEROoq3XIzNURJdteDxSGCeiSicjE/lwvDSh0dXMyiyezRtYOXIQBDgW9F5TT9oPlWCs=) 2025-06-02 17:11:57.122544 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFKX0fr9BP6e7BDnxVrHmhcRlZj2YjuYhpcJ8JUIKJuoJYAejjjtCkNn1CmT7fgonRdNNKLnfbWy5FM0ijTRhE8=) 2025-06-02 17:11:57.123519 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIAmNa/5q/DNNJmaeh6pAZs7B2yjA2vtDv0GG2qGW8T9) 2025-06-02 17:11:57.124200 | orchestrator | 2025-06-02 17:11:57.124939 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-02 17:11:57.125515 | orchestrator | Monday 02 June 2025 17:11:57 +0000 (0:00:01.021) 0:00:26.282 *********** 2025-06-02 17:11:58.135222 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKy++b8F32i9m6K4Y1uUuBrJpaNhfOyWCznhIgJfmja6wSM54BpK8OQwGtksAvjHDaxgaMGhWtWRD4JbQqvLVJXZDtGQuGwJWQfD3uKV+SLq4bN1GYAD+vONpLVUBvLBGbPAvs7VHlQloXy9jWW9KNOQaif65iYeDsco3/oyjc9rnCgMSLx/uBEkbWW4nJsFjBOjhrVaOrm5aNqAILZUijr+NmcMurcXoE34AllVzHq5hrIjzUfpNAOuSkZ0lYTt//ZPEJ8Pv12FwsnkHjPOPVAtKsB/VFvkcy+Jqg2So+Bt7gbFI80PwxcBU/I1LNab5a5Db4jht2S+dkVjvImbZ8B3gJ01/YKxNYvjH7e6n1Gm4BJP07Be4sWtgmppw/ELLuB3y1HaIb+ckTvTfxRpuZAwf3Hr71BfbaqNIbcvIgPd34c6XvZFj2wlkwdrnvqdDa/1SyTmN5cvLnUUpK0PxToYHd8l7Nd4okr4VD9yhu1ZFG8th5Kwe6uA1UeMbuklk=) 2025-06-02 17:11:58.135517 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDEr6ye/XXMlWzDqtdu7GSSsKPnv5OGnrY4w5OBA5rKa1zNkqsYORsJr0c00Zzz09XYfynokKpq9vzmHKBvgDPI=) 2025-06-02 17:11:58.135968 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAQlbi/1bSHj/mEqSgw0VLWABrWpmSEMfO6d9B5hg4y6) 2025-06-02 17:11:58.136493 | orchestrator | 2025-06-02 17:11:58.137211 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-02 17:11:58.138539 | orchestrator | Monday 02 June 2025 17:11:58 +0000 (0:00:01.014) 0:00:27.297 *********** 
2025-06-02 17:11:58.287018 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-02 17:11:58.287215 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-02 17:11:58.287830 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-02 17:11:58.288724 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-02 17:11:58.289711 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-02 17:11:58.290272 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-02 17:11:58.290666 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-02 17:11:58.291136 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:11:58.291888 | orchestrator | 2025-06-02 17:11:58.292637 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-06-02 17:11:58.293274 | orchestrator | Monday 02 June 2025 17:11:58 +0000 (0:00:00.152) 0:00:27.449 *********** 2025-06-02 17:11:58.335246 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:11:58.335981 | orchestrator | 2025-06-02 17:11:58.336551 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-06-02 17:11:58.337234 | orchestrator | Monday 02 June 2025 17:11:58 +0000 (0:00:00.047) 0:00:27.497 *********** 2025-06-02 17:11:58.388444 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:11:58.389223 | orchestrator | 2025-06-02 17:11:58.389892 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-06-02 17:11:58.390792 | orchestrator | Monday 02 June 2025 17:11:58 +0000 (0:00:00.052) 0:00:27.550 *********** 2025-06-02 17:11:59.011956 | orchestrator | changed: [testbed-manager] 2025-06-02 17:11:59.012447 | orchestrator | 2025-06-02 17:11:59.013186 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-02 17:11:59.013471 | orchestrator | 2025-06-02 17:11:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 17:11:59.013751 | orchestrator | 2025-06-02 17:11:59 | INFO  | Please wait and do not abort execution. 2025-06-02 17:11:59.014557 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 17:11:59.014904 | orchestrator | 2025-06-02 17:11:59.015703 | orchestrator | 2025-06-02 17:11:59.016231 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:11:59.016771 | orchestrator | Monday 02 June 2025 17:11:59 +0000 (0:00:00.623) 0:00:28.173 *********** 2025-06-02 17:11:59.017259 | orchestrator | =============================================================================== 2025-06-02 17:11:59.017747 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.33s 2025-06-02 17:11:59.018286 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.21s 2025-06-02 17:11:59.018759 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.31s 2025-06-02 17:11:59.019337 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.16s 2025-06-02 17:11:59.019747 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-06-02 17:11:59.020198 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-06-02 17:11:59.020536 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-06-02 17:11:59.020763 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2025-06-02 17:11:59.021253 | orchestrator | 
osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-06-02 17:11:59.021591 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-06-02 17:11:59.021917 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-06-02 17:11:59.022313 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-06-02 17:11:59.022731 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-06-02 17:11:59.023093 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-06-02 17:11:59.023531 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2025-06-02 17:11:59.023996 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-06-02 17:11:59.024233 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.62s 2025-06-02 17:11:59.025090 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2025-06-02 17:11:59.025967 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-06-02 17:11:59.026586 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s 2025-06-02 17:11:59.371898 | orchestrator | + osism apply squid 2025-06-02 17:12:00.955686 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:12:00.955814 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:12:00.955829 | orchestrator | Registering Redlock._release_script 2025-06-02 17:12:01.018660 | orchestrator | 2025-06-02 17:12:01 | INFO  | Task 20d8ac7b-008d-4217-8243-091d31123739 (squid) was prepared for execution. 
2025-06-02 17:12:01.019390 | orchestrator | 2025-06-02 17:12:01 | INFO  | It takes a moment until task 20d8ac7b-008d-4217-8243-091d31123739 (squid) has been started and output is visible here. 2025-06-02 17:12:05.422535 | orchestrator | 2025-06-02 17:12:05.423001 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-06-02 17:12:05.424689 | orchestrator | 2025-06-02 17:12:05.426181 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-06-02 17:12:05.427281 | orchestrator | Monday 02 June 2025 17:12:05 +0000 (0:00:00.192) 0:00:00.192 *********** 2025-06-02 17:12:05.514397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-06-02 17:12:05.514500 | orchestrator | 2025-06-02 17:12:05.514914 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-06-02 17:12:05.515838 | orchestrator | Monday 02 June 2025 17:12:05 +0000 (0:00:00.096) 0:00:00.288 *********** 2025-06-02 17:12:07.106119 | orchestrator | ok: [testbed-manager] 2025-06-02 17:12:07.106674 | orchestrator | 2025-06-02 17:12:07.107825 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-06-02 17:12:07.108011 | orchestrator | Monday 02 June 2025 17:12:07 +0000 (0:00:01.590) 0:00:01.879 *********** 2025-06-02 17:12:08.352225 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-06-02 17:12:08.352432 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-06-02 17:12:08.353379 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-06-02 17:12:08.354891 | orchestrator | 2025-06-02 17:12:08.356064 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-06-02 17:12:08.356329 | orchestrator | Monday 02 
June 2025 17:12:08 +0000 (0:00:01.245) 0:00:03.125 *********** 2025-06-02 17:12:09.467445 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-06-02 17:12:09.467823 | orchestrator | 2025-06-02 17:12:09.469124 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-06-02 17:12:09.469513 | orchestrator | Monday 02 June 2025 17:12:09 +0000 (0:00:01.115) 0:00:04.240 *********** 2025-06-02 17:12:09.984655 | orchestrator | ok: [testbed-manager] 2025-06-02 17:12:09.985845 | orchestrator | 2025-06-02 17:12:09.985888 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-06-02 17:12:09.985965 | orchestrator | Monday 02 June 2025 17:12:09 +0000 (0:00:00.518) 0:00:04.759 *********** 2025-06-02 17:12:10.967644 | orchestrator | changed: [testbed-manager] 2025-06-02 17:12:10.967822 | orchestrator | 2025-06-02 17:12:10.968897 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-06-02 17:12:10.969443 | orchestrator | Monday 02 June 2025 17:12:10 +0000 (0:00:00.979) 0:00:05.738 *********** 2025-06-02 17:12:43.668173 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-06-02 17:12:43.668292 | orchestrator | ok: [testbed-manager] 2025-06-02 17:12:43.669079 | orchestrator | 2025-06-02 17:12:43.669905 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-06-02 17:12:43.670710 | orchestrator | Monday 02 June 2025 17:12:43 +0000 (0:00:32.698) 0:00:38.437 *********** 2025-06-02 17:12:56.248038 | orchestrator | changed: [testbed-manager] 2025-06-02 17:12:56.248161 | orchestrator | 2025-06-02 17:12:56.248674 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-06-02 17:12:56.248703 | orchestrator | Monday 02 June 2025 17:12:56 +0000 (0:00:12.580) 0:00:51.018 *********** 2025-06-02 17:13:56.323114 | orchestrator | Pausing for 60 seconds 2025-06-02 17:13:56.323237 | orchestrator | changed: [testbed-manager] 2025-06-02 17:13:56.323255 | orchestrator | 2025-06-02 17:13:56.323268 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-06-02 17:13:56.324317 | orchestrator | Monday 02 June 2025 17:13:56 +0000 (0:01:00.074) 0:01:51.092 *********** 2025-06-02 17:13:56.385535 | orchestrator | ok: [testbed-manager] 2025-06-02 17:13:56.385849 | orchestrator | 2025-06-02 17:13:56.386359 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-06-02 17:13:56.387101 | orchestrator | Monday 02 June 2025 17:13:56 +0000 (0:00:00.066) 0:01:51.159 *********** 2025-06-02 17:13:57.021622 | orchestrator | changed: [testbed-manager] 2025-06-02 17:13:57.022553 | orchestrator | 2025-06-02 17:13:57.022689 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:13:57.023498 | orchestrator | 2025-06-02 17:13:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-06-02 17:13:57.023524 | orchestrator | 2025-06-02 17:13:57 | INFO  | Please wait and do not abort execution. 2025-06-02 17:13:57.024319 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:13:57.024741 | orchestrator | 2025-06-02 17:13:57.025411 | orchestrator | 2025-06-02 17:13:57.025823 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:13:57.026684 | orchestrator | Monday 02 June 2025 17:13:57 +0000 (0:00:00.636) 0:01:51.796 *********** 2025-06-02 17:13:57.027048 | orchestrator | =============================================================================== 2025-06-02 17:13:57.027817 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-06-02 17:13:57.027928 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.70s 2025-06-02 17:13:57.029055 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.58s 2025-06-02 17:13:57.029076 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.59s 2025-06-02 17:13:57.029416 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.25s 2025-06-02 17:13:57.029644 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2025-06-02 17:13:57.030303 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.98s 2025-06-02 17:13:57.030428 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.64s 2025-06-02 17:13:57.031238 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.52s 2025-06-02 17:13:57.031261 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-06-02 17:13:57.031713 | orchestrator | 
osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-06-02 17:13:57.557569 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-02 17:13:57.558069 | orchestrator | ++ semver latest 9.0.0 2025-06-02 17:13:57.603005 | orchestrator | + [[ -1 -lt 0 ]] 2025-06-02 17:13:57.603087 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-02 17:13:57.603102 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-06-02 17:13:59.360192 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:13:59.360315 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:13:59.360356 | orchestrator | Registering Redlock._release_script 2025-06-02 17:13:59.444052 | orchestrator | 2025-06-02 17:13:59 | INFO  | Task 33c7693f-d407-4a32-9ee3-82bad37c3315 (operator) was prepared for execution. 2025-06-02 17:13:59.444214 | orchestrator | 2025-06-02 17:13:59 | INFO  | It takes a moment until task 33c7693f-d407-4a32-9ee3-82bad37c3315 (operator) has been started and output is visible here. 
2025-06-02 17:14:03.679823 | orchestrator | 2025-06-02 17:14:03.683245 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-06-02 17:14:03.683627 | orchestrator | 2025-06-02 17:14:03.686550 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-02 17:14:03.687762 | orchestrator | Monday 02 June 2025 17:14:03 +0000 (0:00:00.166) 0:00:00.166 *********** 2025-06-02 17:14:07.062254 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:14:07.062792 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:14:07.063946 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:14:07.065046 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:14:07.066888 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:14:07.068244 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:14:07.069399 | orchestrator | 2025-06-02 17:14:07.070110 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-06-02 17:14:07.070863 | orchestrator | Monday 02 June 2025 17:14:07 +0000 (0:00:03.386) 0:00:03.552 *********** 2025-06-02 17:14:07.890099 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:14:07.891443 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:14:07.892531 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:14:07.893385 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:14:07.894220 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:14:07.895132 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:14:07.895494 | orchestrator | 2025-06-02 17:14:07.896141 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-06-02 17:14:07.896744 | orchestrator | 2025-06-02 17:14:07.897440 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-02 17:14:07.897956 | orchestrator | Monday 02 June 2025 17:14:07 +0000 (0:00:00.826) 0:00:04.379 *********** 2025-06-02 
17:14:07.969950 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:14:07.996681 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:14:08.028174 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:14:08.073898 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:14:08.074552 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:14:08.074762 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:14:08.074784 | orchestrator | 2025-06-02 17:14:08.075687 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-02 17:14:08.075790 | orchestrator | Monday 02 June 2025 17:14:08 +0000 (0:00:00.183) 0:00:04.563 *********** 2025-06-02 17:14:08.161625 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:14:08.193532 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:14:08.231878 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:14:08.298726 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:14:08.299995 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:14:08.301108 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:14:08.301408 | orchestrator | 2025-06-02 17:14:08.302968 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-02 17:14:08.303012 | orchestrator | Monday 02 June 2025 17:14:08 +0000 (0:00:00.224) 0:00:04.787 *********** 2025-06-02 17:14:08.964806 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:14:08.965820 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:14:08.966936 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:14:08.967816 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:14:08.968247 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:14:08.970110 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:14:08.970390 | orchestrator | 2025-06-02 17:14:08.970990 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-02 17:14:08.971297 | orchestrator | Monday 02 June 2025 
17:14:08 +0000 (0:00:00.665) 0:00:05.453 ***********
2025-06-02 17:14:09.787310 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:09.787640 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:09.788462 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:09.789078 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:09.789396 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:09.789767 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:09.790139 | orchestrator |
2025-06-02 17:14:09.790612 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-06-02 17:14:09.791240 | orchestrator | Monday 02 June 2025 17:14:09 +0000 (0:00:00.824) 0:00:06.277 ***********
2025-06-02 17:14:10.999087 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-06-02 17:14:11.000350 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-06-02 17:14:11.000586 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-06-02 17:14:11.002659 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-06-02 17:14:11.003587 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-06-02 17:14:11.005740 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-06-02 17:14:11.007037 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-06-02 17:14:11.008204 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-06-02 17:14:11.010083 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-06-02 17:14:11.011060 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-06-02 17:14:11.012292 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-06-02 17:14:11.013810 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-06-02 17:14:11.015162 | orchestrator |
2025-06-02 17:14:11.016930 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-06-02 17:14:11.016950 | orchestrator | Monday 02 June 2025 17:14:10 +0000 (0:00:01.209) 0:00:07.487 ***********
2025-06-02 17:14:12.369265 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:12.370624 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:12.371423 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:12.372250 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:12.373171 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:12.373918 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:12.374669 | orchestrator |
2025-06-02 17:14:12.375279 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-06-02 17:14:12.375874 | orchestrator | Monday 02 June 2025 17:14:12 +0000 (0:00:01.370) 0:00:08.858 ***********
2025-06-02 17:14:13.565694 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-06-02 17:14:13.568184 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-06-02 17:14:13.569600 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-06-02 17:14:13.634688 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 17:14:13.635539 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 17:14:13.636565 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 17:14:13.639600 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 17:14:13.640252 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 17:14:13.642357 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-06-02 17:14:13.642392 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-06-02 17:14:13.642450 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-06-02 17:14:13.644056 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-06-02 17:14:13.644085 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-06-02 17:14:13.644787 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-06-02 17:14:13.645132 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-06-02 17:14:13.645886 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-06-02 17:14:13.647021 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-06-02 17:14:13.647045 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-06-02 17:14:13.647569 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-06-02 17:14:13.647949 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-06-02 17:14:13.648804 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-06-02 17:14:13.649602 | orchestrator |
2025-06-02 17:14:13.649861 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-06-02 17:14:13.650377 | orchestrator | Monday 02 June 2025 17:14:13 +0000 (0:00:01.266) 0:00:10.124 ***********
2025-06-02 17:14:14.223517 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:14.223817 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:14.225465 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:14.226129 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:14.227176 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:14.227227 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:14.227693 | orchestrator |
2025-06-02 17:14:14.228125 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-06-02 17:14:14.229356 | orchestrator | Monday 02 June 2025 17:14:14 +0000 (0:00:00.587) 0:00:10.712 ***********
2025-06-02 17:14:14.319831 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:14:14.351624 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:14:14.418208 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:14:14.419797 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:14.420829 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:14.421736 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:14.422676 | orchestrator |
2025-06-02 17:14:14.423417 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-06-02 17:14:14.424069 | orchestrator | Monday 02 June 2025 17:14:14 +0000 (0:00:00.195) 0:00:10.908 ***********
2025-06-02 17:14:15.203310 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-06-02 17:14:15.204418 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:15.204732 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-06-02 17:14:15.205429 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-06-02 17:14:15.206131 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-06-02 17:14:15.207160 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:15.207897 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:15.208793 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:15.208864 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-06-02 17:14:15.210190 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:15.212275 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 17:14:15.214004 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:15.214823 | orchestrator |
2025-06-02 17:14:15.216182 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-06-02 17:14:15.216960 | orchestrator | Monday 02 June 2025 17:14:15 +0000 (0:00:00.783) 0:00:11.691 ***********
2025-06-02 17:14:15.278155 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:14:15.304684 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:14:15.327441 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:14:15.379999 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:15.380183 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:15.381868 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:15.382595 | orchestrator |
2025-06-02 17:14:15.383365 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-06-02 17:14:15.383667 | orchestrator | Monday 02 June 2025 17:14:15 +0000 (0:00:00.176) 0:00:11.868 ***********
2025-06-02 17:14:15.447519 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:14:15.470751 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:14:15.516790 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:14:15.550509 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:15.550693 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:15.551641 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:15.552521 | orchestrator |
2025-06-02 17:14:15.552548 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-06-02 17:14:15.552828 | orchestrator | Monday 02 June 2025 17:14:15 +0000 (0:00:00.173) 0:00:12.042 ***********
2025-06-02 17:14:15.631564 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:14:15.653706 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:14:15.679416 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:14:15.730186 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:15.730280 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:15.730414 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:15.732168 | orchestrator |
2025-06-02 17:14:15.732750 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-06-02 17:14:15.733261 | orchestrator | Monday 02 June 2025 17:14:15 +0000 (0:00:00.177) 0:00:12.219 ***********
2025-06-02 17:14:16.449180 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:16.449399 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:16.452267 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:16.452294 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:16.452307 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:16.453575 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:16.455117 | orchestrator |
2025-06-02 17:14:16.457658 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-06-02 17:14:16.458661 | orchestrator | Monday 02 June 2025 17:14:16 +0000 (0:00:00.717) 0:00:12.937 ***********
2025-06-02 17:14:16.575618 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:14:16.604502 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:14:16.730005 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:14:16.730176 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:16.731434 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:16.731773 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:16.733000 | orchestrator |
2025-06-02 17:14:16.733629 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:14:16.734127 | orchestrator | 2025-06-02 17:14:16 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:14:16.734157 | orchestrator | 2025-06-02 17:14:16 | INFO  | Please wait and do not abort execution.
2025-06-02 17:14:16.735795 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:14:16.736532 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:14:16.737636 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:14:16.738593 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:14:16.739554 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:14:16.739956 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:14:16.740561 | orchestrator |
2025-06-02 17:14:16.741404 | orchestrator |
2025-06-02 17:14:16.741950 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:14:16.742242 | orchestrator | Monday 02 June 2025 17:14:16 +0000 (0:00:00.282) 0:00:13.220 ***********
2025-06-02 17:14:16.743000 | orchestrator | ===============================================================================
2025-06-02 17:14:16.743692 | orchestrator | Gathering Facts --------------------------------------------------------- 3.39s
2025-06-02 17:14:16.744406 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.37s
2025-06-02 17:14:16.744595 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.27s
2025-06-02 17:14:16.745059 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s
2025-06-02 17:14:16.745727 | orchestrator | Do not require tty for all users ---------------------------------------- 0.83s
2025-06-02 17:14:16.746077 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.82s
2025-06-02 17:14:16.746439 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.78s
2025-06-02 17:14:16.746963 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.72s
2025-06-02 17:14:16.747287 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.67s
2025-06-02 17:14:16.747776 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.59s
2025-06-02 17:14:16.748112 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.28s
2025-06-02 17:14:16.748570 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.22s
2025-06-02 17:14:16.748918 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s
2025-06-02 17:14:16.749444 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.18s
2025-06-02 17:14:16.749949 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s
2025-06-02 17:14:16.750346 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s
2025-06-02 17:14:16.750501 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.17s
2025-06-02 17:14:17.289629 | orchestrator | + osism apply --environment custom facts
2025-06-02 17:14:19.040368 | orchestrator | 2025-06-02 17:14:19 | INFO  | Trying to run play facts in environment custom
2025-06-02 17:14:19.045343 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:14:19.045408 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:14:19.045421 | orchestrator | Registering Redlock._release_script
2025-06-02 17:14:19.110783 | orchestrator | 2025-06-02 17:14:19 | INFO  | Task fb03ad36-e160-441e-9df4-408c60c7e05b (facts) was prepared for execution.
2025-06-02 17:14:19.110864 | orchestrator | 2025-06-02 17:14:19 | INFO  | It takes a moment until task fb03ad36-e160-441e-9df4-408c60c7e05b (facts) has been started and output is visible here.
2025-06-02 17:14:23.162593 | orchestrator |
2025-06-02 17:14:23.163126 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-06-02 17:14:23.166725 | orchestrator |
2025-06-02 17:14:23.167529 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 17:14:23.167899 | orchestrator | Monday 02 June 2025 17:14:23 +0000 (0:00:00.088) 0:00:00.088 ***********
2025-06-02 17:14:24.590381 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:24.591222 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:24.592554 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:24.594179 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:24.594615 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:24.595682 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:24.596546 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:24.597496 | orchestrator |
2025-06-02 17:14:24.598100 | orchestrator | TASK [Copy fact file] **********************************************************
2025-06-02 17:14:24.598676 | orchestrator | Monday 02 June 2025 17:14:24 +0000 (0:00:01.429) 0:00:01.518 ***********
2025-06-02 17:14:25.895036 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:25.895201 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:25.895678 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:14:25.896517 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:14:25.896759 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:25.897941 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:25.899193 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:14:25.900110 | orchestrator |
2025-06-02 17:14:25.901052 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-06-02 17:14:25.901691 | orchestrator |
2025-06-02 17:14:25.902433 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-06-02 17:14:25.903474 | orchestrator | Monday 02 June 2025 17:14:25 +0000 (0:00:01.306) 0:00:02.825 ***********
2025-06-02 17:14:26.042653 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:26.042804 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:26.043374 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:26.046750 | orchestrator |
2025-06-02 17:14:26.046796 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-06-02 17:14:26.049093 | orchestrator | Monday 02 June 2025 17:14:26 +0000 (0:00:00.146) 0:00:02.972 ***********
2025-06-02 17:14:26.261295 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:26.262658 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:26.264809 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:26.269728 | orchestrator |
2025-06-02 17:14:26.269915 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-06-02 17:14:26.272702 | orchestrator | Monday 02 June 2025 17:14:26 +0000 (0:00:00.219) 0:00:03.191 ***********
2025-06-02 17:14:26.494281 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:26.495049 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:26.495429 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:26.496234 | orchestrator |
2025-06-02 17:14:26.497273 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-06-02 17:14:26.497764 | orchestrator | Monday 02 June 2025 17:14:26 +0000 (0:00:00.233) 0:00:03.424 ***********
2025-06-02 17:14:26.660080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:14:26.660240 | orchestrator |
2025-06-02 17:14:26.661222 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-06-02 17:14:26.662098 | orchestrator | Monday 02 June 2025 17:14:26 +0000 (0:00:00.165) 0:00:03.590 ***********
2025-06-02 17:14:27.133509 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:27.134440 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:27.134808 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:27.135831 | orchestrator |
2025-06-02 17:14:27.136974 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-06-02 17:14:27.137932 | orchestrator | Monday 02 June 2025 17:14:27 +0000 (0:00:00.474) 0:00:04.064 ***********
2025-06-02 17:14:27.264085 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:27.265223 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:27.267253 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:27.267742 | orchestrator |
2025-06-02 17:14:27.268715 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-06-02 17:14:27.269767 | orchestrator | Monday 02 June 2025 17:14:27 +0000 (0:00:00.129) 0:00:04.194 ***********
2025-06-02 17:14:28.310345 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:28.311027 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:28.312184 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:28.313670 | orchestrator |
2025-06-02 17:14:28.314996 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-06-02 17:14:28.316073 | orchestrator | Monday 02 June 2025 17:14:28 +0000 (0:00:01.043) 0:00:05.237 ***********
2025-06-02 17:14:28.766721 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:28.768464 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:28.768632 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:28.770570 | orchestrator |
2025-06-02 17:14:28.772093 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-06-02 17:14:28.773263 | orchestrator | Monday 02 June 2025 17:14:28 +0000 (0:00:00.459) 0:00:05.696 ***********
2025-06-02 17:14:29.901967 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:29.902542 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:29.903004 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:29.903779 | orchestrator |
2025-06-02 17:14:29.904272 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-06-02 17:14:29.905272 | orchestrator | Monday 02 June 2025 17:14:29 +0000 (0:00:01.134) 0:00:06.831 ***********
2025-06-02 17:14:43.729837 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:43.730845 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:43.730881 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:43.730910 | orchestrator |
2025-06-02 17:14:43.730925 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-06-02 17:14:43.731213 | orchestrator | Monday 02 June 2025 17:14:43 +0000 (0:00:13.827) 0:00:20.658 ***********
2025-06-02 17:14:43.854653 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:14:43.854753 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:14:43.855747 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:14:43.857400 | orchestrator |
2025-06-02 17:14:43.858138 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-06-02 17:14:43.859233 | orchestrator | Monday 02 June 2025 17:14:43 +0000 (0:00:00.125) 0:00:20.784 ***********
2025-06-02 17:14:51.225868 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:14:51.229269 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:14:51.232820 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:14:51.233176 | orchestrator |
2025-06-02 17:14:51.233893 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-06-02 17:14:51.234665 | orchestrator | Monday 02 June 2025 17:14:51 +0000 (0:00:07.372) 0:00:28.156 ***********
2025-06-02 17:14:51.664019 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:51.665124 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:51.667147 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:51.667171 | orchestrator |
2025-06-02 17:14:51.667186 | orchestrator | TASK [Copy fact files] *********************************************************
2025-06-02 17:14:51.668074 | orchestrator | Monday 02 June 2025 17:14:51 +0000 (0:00:00.438) 0:00:28.594 ***********
2025-06-02 17:14:55.192880 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-06-02 17:14:55.193526 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-06-02 17:14:55.194326 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-06-02 17:14:55.196161 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-06-02 17:14:55.197370 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-06-02 17:14:55.198120 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-06-02 17:14:55.198727 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-06-02 17:14:55.199438 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-06-02 17:14:55.200555 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-06-02 17:14:55.201257 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-06-02 17:14:55.201943 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-06-02 17:14:55.202895 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-06-02 17:14:55.203141 | orchestrator |
2025-06-02 17:14:55.203541 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 17:14:55.203890 | orchestrator | Monday 02 June 2025 17:14:55 +0000 (0:00:03.527) 0:00:32.122 ***********
2025-06-02 17:14:56.304972 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:56.305062 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:56.305076 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:56.305089 | orchestrator |
2025-06-02 17:14:56.305102 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 17:14:56.308163 | orchestrator |
2025-06-02 17:14:56.308190 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 17:14:56.308203 | orchestrator | Monday 02 June 2025 17:14:56 +0000 (0:00:01.111) 0:00:33.234 ***********
2025-06-02 17:14:59.970783 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:14:59.970922 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:14:59.970938 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:14:59.971016 | orchestrator | ok: [testbed-manager]
2025-06-02 17:14:59.971108 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:14:59.971557 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:14:59.972031 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:14:59.972734 | orchestrator |
2025-06-02 17:14:59.972988 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:14:59.973446 | orchestrator | 2025-06-02 17:14:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:14:59.973481 | orchestrator | 2025-06-02 17:14:59 | INFO  | Please wait and do not abort execution.
2025-06-02 17:14:59.974497 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:14:59.976913 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:14:59.976988 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:14:59.977003 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:14:59.977483 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:14:59.977507 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:14:59.978187 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:14:59.978538 | orchestrator |
2025-06-02 17:14:59.979063 | orchestrator |
2025-06-02 17:14:59.982147 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:14:59.982177 | orchestrator | Monday 02 June 2025 17:14:59 +0000 (0:00:03.665) 0:00:36.899 ***********
2025-06-02 17:14:59.982189 | orchestrator | ===============================================================================
2025-06-02 17:14:59.982201 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.83s
2025-06-02 17:14:59.982213 | orchestrator | Install required packages (Debian) -------------------------------------- 7.37s
2025-06-02 17:14:59.982920 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.67s
2025-06-02 17:14:59.983264 | orchestrator | Copy fact files --------------------------------------------------------- 3.53s
2025-06-02 17:14:59.983603 | orchestrator | Create custom facts directory ------------------------------------------- 1.43s
2025-06-02 17:14:59.984346 | orchestrator | Copy fact file ---------------------------------------------------------- 1.31s
2025-06-02 17:14:59.984767 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.13s
2025-06-02 17:14:59.985091 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.11s
2025-06-02 17:14:59.985428 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.04s
2025-06-02 17:14:59.985936 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2025-06-02 17:14:59.986705 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.46s
2025-06-02 17:14:59.986908 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s
2025-06-02 17:14:59.987173 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s
2025-06-02 17:14:59.987721 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.22s
2025-06-02 17:14:59.988946 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s
2025-06-02 17:14:59.988968 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.15s
2025-06-02 17:14:59.989003 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.13s
2025-06-02 17:14:59.989016 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.13s
2025-06-02 17:15:00.543139 | orchestrator | + osism apply bootstrap
2025-06-02 17:15:02.312658 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:15:02.312751 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:15:02.312763 | orchestrator | Registering Redlock._release_script
2025-06-02 17:15:02.393021 | orchestrator | 2025-06-02 17:15:02 | INFO  | Task 4ae4a04c-12cd-488e-9ad0-d8359f5bd59f (bootstrap) was prepared for execution.
2025-06-02 17:15:02.393145 | orchestrator | 2025-06-02 17:15:02 | INFO  | It takes a moment until task 4ae4a04c-12cd-488e-9ad0-d8359f5bd59f (bootstrap) has been started and output is visible here.
2025-06-02 17:15:06.850461 | orchestrator |
2025-06-02 17:15:06.850690 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-06-02 17:15:06.851497 | orchestrator |
2025-06-02 17:15:06.852063 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-06-02 17:15:06.854188 | orchestrator | Monday 02 June 2025 17:15:06 +0000 (0:00:00.174) 0:00:00.174 ***********
2025-06-02 17:15:06.954337 | orchestrator | ok: [testbed-manager]
2025-06-02 17:15:06.982935 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:15:07.019648 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:15:07.051380 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:15:07.155799 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:15:07.155969 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:15:07.155987 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:15:07.156329 | orchestrator |
2025-06-02 17:15:07.156922 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 17:15:07.157882 | orchestrator |
2025-06-02 17:15:07.160875 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 17:15:07.160935 | orchestrator | Monday 02 June 2025 17:15:07 +0000 (0:00:00.309) 0:00:00.483 ***********
2025-06-02 17:15:10.923865 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:15:10.924373 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:15:10.924596 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:15:10.925075 | orchestrator | ok: [testbed-manager]
2025-06-02 17:15:10.925559 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:15:10.928412 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:15:10.928962 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:15:10.929713 | orchestrator |
2025-06-02 17:15:10.930623 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-06-02 17:15:10.933116 | orchestrator |
2025-06-02 17:15:10.933781 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 17:15:10.934783 | orchestrator | Monday 02 June 2025 17:15:10 +0000 (0:00:03.767) 0:00:04.251 ***********
2025-06-02 17:15:11.020995 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-06-02 17:15:11.021106 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-06-02 17:15:11.021192 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-06-02 17:15:11.061447 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-06-02 17:15:11.062079 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-06-02 17:15:11.062546 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:15:11.063255 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-06-02 17:15:11.096908 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 17:15:11.097494 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-06-02 17:15:11.099625 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-06-02 17:15:11.100074 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-06-02 17:15:11.100427 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 17:15:11.125728 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:15:11.127406 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-06-02 17:15:11.127752 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-06-02 17:15:11.413738 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-06-02 17:15:11.414593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-06-02 17:15:11.415303 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-06-02 17:15:11.416794 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-06-02 17:15:11.417639 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-06-02 17:15:11.418377 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-06-02 17:15:11.419048 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-06-02 17:15:11.419600 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-06-02 17:15:11.420647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-06-02 17:15:11.421260 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-06-02 17:15:11.421663 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:15:11.422534 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-06-02 17:15:11.423225 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-06-02 17:15:11.424070 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-06-02 17:15:11.424509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-06-02 17:15:11.425062 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:15:11.425953 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-06-02 17:15:11.426584 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-06-02 17:15:11.426911 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-06-02 17:15:11.427763 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-06-02 17:15:11.428995 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-06-02 17:15:11.429761 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-06-02 17:15:11.430221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-06-02 17:15:11.430909 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-06-02 17:15:11.431686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-06-02 17:15:11.432355 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-06-02 17:15:11.432659 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-06-02 17:15:11.433394 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-06-02 17:15:11.434079 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-06-02 17:15:11.434347 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:15:11.435092 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:15:11.435544 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-06-02 17:15:11.436405 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-06-02 17:15:11.436677 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-06-02 17:15:11.437144 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-06-02 17:15:11.437664 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-06-02 17:15:11.438001 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-06-02 17:15:11.438468 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-06-02 17:15:11.440600 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:15:11.444322 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-06-02 17:15:11.444362 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:15:11.444369 | orchestrator |
2025-06-02 17:15:11.444413 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-06-02 17:15:11.445519 | orchestrator |
2025-06-02 17:15:11.446581 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-06-02 17:15:11.447156 | orchestrator | Monday 02 June 2025 17:15:11 +0000 (0:00:00.489) 0:00:04.740 ***********
2025-06-02 17:15:12.674548 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:15:12.677188 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:15:12.677236 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:15:12.677499 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:15:12.678586 | orchestrator | ok: [testbed-manager]
2025-06-02 17:15:12.679047 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:15:12.680121 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:15:12.680641 | orchestrator |
2025-06-02 17:15:12.681708 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-06-02 17:15:12.682533 | orchestrator | Monday 02 June 2025 17:15:12 +0000 (0:00:01.260) 0:00:06.001 ***********
2025-06-02 17:15:14.004777 | orchestrator | ok: [testbed-manager]
2025-06-02 17:15:14.005193 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:15:14.005610 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:15:14.008831 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:15:14.009328 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:15:14.010147 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:15:14.010418 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:15:14.011073 | orchestrator |
2025-06-02
17:15:14.011588 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-06-02 17:15:14.012017 | orchestrator | Monday 02 June 2025 17:15:13 +0000 (0:00:01.325) 0:00:07.326 *********** 2025-06-02 17:15:14.291423 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:15:14.294548 | orchestrator | 2025-06-02 17:15:14.294633 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-02 17:15:14.294649 | orchestrator | Monday 02 June 2025 17:15:14 +0000 (0:00:00.289) 0:00:07.616 *********** 2025-06-02 17:15:16.580962 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:16.582208 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:15:16.583182 | orchestrator | changed: [testbed-manager] 2025-06-02 17:15:16.584344 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:15:16.589251 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:15:16.589952 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:15:16.591107 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:16.591726 | orchestrator | 2025-06-02 17:15:16.592850 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-02 17:15:16.596152 | orchestrator | Monday 02 June 2025 17:15:16 +0000 (0:00:02.289) 0:00:09.905 *********** 2025-06-02 17:15:16.702625 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:15:16.948540 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:15:16.949868 | orchestrator | 2025-06-02 17:15:16.950741 | orchestrator | TASK 
[osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-02 17:15:16.952141 | orchestrator | Monday 02 June 2025 17:15:16 +0000 (0:00:00.368) 0:00:10.273 *********** 2025-06-02 17:15:18.023354 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:18.023459 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:15:18.025545 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:15:18.026143 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:18.027141 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:15:18.028207 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:15:18.028680 | orchestrator | 2025-06-02 17:15:18.029077 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-02 17:15:18.029846 | orchestrator | Monday 02 June 2025 17:15:18 +0000 (0:00:01.074) 0:00:11.348 *********** 2025-06-02 17:15:18.101341 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:15:18.640691 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:18.641108 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:15:18.642517 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:18.645560 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:15:18.645647 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:15:18.645662 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:15:18.645675 | orchestrator | 2025-06-02 17:15:18.645689 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-02 17:15:18.645702 | orchestrator | Monday 02 June 2025 17:15:18 +0000 (0:00:00.619) 0:00:11.967 *********** 2025-06-02 17:15:18.739485 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:15:18.768996 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:15:18.789471 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:15:19.108159 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:15:19.108331 | 
orchestrator | skipping: [testbed-node-4] 2025-06-02 17:15:19.108347 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:15:19.109383 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:19.111042 | orchestrator | 2025-06-02 17:15:19.112083 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-02 17:15:19.112996 | orchestrator | Monday 02 June 2025 17:15:19 +0000 (0:00:00.463) 0:00:12.430 *********** 2025-06-02 17:15:19.182231 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:15:19.213871 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:15:19.237834 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:15:19.264251 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:15:19.347462 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:15:19.347873 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:15:19.349312 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:15:19.350479 | orchestrator | 2025-06-02 17:15:19.352005 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-02 17:15:19.352332 | orchestrator | Monday 02 June 2025 17:15:19 +0000 (0:00:00.241) 0:00:12.672 *********** 2025-06-02 17:15:19.667723 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:15:19.670084 | orchestrator | 2025-06-02 17:15:19.670312 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-02 17:15:19.670888 | orchestrator | Monday 02 June 2025 17:15:19 +0000 (0:00:00.320) 0:00:12.992 *********** 2025-06-02 17:15:20.018572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:15:20.020760 | orchestrator | 2025-06-02 17:15:20.023297 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-02 17:15:20.023334 | orchestrator | Monday 02 June 2025 17:15:20 +0000 (0:00:00.352) 0:00:13.345 *********** 2025-06-02 17:15:21.272929 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:21.274503 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:21.275488 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:21.276670 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:21.277368 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:21.278131 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:21.278902 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:21.279304 | orchestrator | 2025-06-02 17:15:21.279934 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-02 17:15:21.280856 | orchestrator | Monday 02 June 2025 17:15:21 +0000 (0:00:01.250) 0:00:14.595 *********** 2025-06-02 17:15:21.341930 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:15:21.396019 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:15:21.435527 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:15:21.469726 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:15:21.532172 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:15:21.536642 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:15:21.536711 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:15:21.537188 | orchestrator | 2025-06-02 17:15:21.538777 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-02 17:15:21.539911 | orchestrator | Monday 02 June 2025 17:15:21 +0000 (0:00:00.262) 0:00:14.858 *********** 2025-06-02 17:15:22.088222 | orchestrator | ok: [testbed-manager] 
2025-06-02 17:15:22.088691 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:22.090395 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:22.091663 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:22.093019 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:22.093799 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:22.095636 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:22.096232 | orchestrator | 2025-06-02 17:15:22.097676 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-02 17:15:22.098529 | orchestrator | Monday 02 June 2025 17:15:22 +0000 (0:00:00.554) 0:00:15.413 *********** 2025-06-02 17:15:22.198376 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:15:22.225582 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:15:22.257663 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:15:22.356711 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:15:22.357155 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:15:22.358597 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:15:22.361179 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:15:22.361203 | orchestrator | 2025-06-02 17:15:22.362697 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-02 17:15:22.363381 | orchestrator | Monday 02 June 2025 17:15:22 +0000 (0:00:00.269) 0:00:15.682 *********** 2025-06-02 17:15:22.954079 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:22.954183 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:22.954312 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:15:22.954902 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:22.955175 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:15:22.955451 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:15:22.956939 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:15:22.956966 | 
orchestrator | 2025-06-02 17:15:22.957517 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-02 17:15:22.957615 | orchestrator | Monday 02 June 2025 17:15:22 +0000 (0:00:00.596) 0:00:16.279 *********** 2025-06-02 17:15:24.085492 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:24.087290 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:15:24.088300 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:15:24.089744 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:24.089813 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:24.090150 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:15:24.090911 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:15:24.091423 | orchestrator | 2025-06-02 17:15:24.091691 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-02 17:15:24.092686 | orchestrator | Monday 02 June 2025 17:15:24 +0000 (0:00:01.131) 0:00:17.411 *********** 2025-06-02 17:15:25.250470 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:25.250679 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:25.251248 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:25.251848 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:25.252577 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:25.253119 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:25.253750 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:25.255380 | orchestrator | 2025-06-02 17:15:25.255722 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-02 17:15:25.256305 | orchestrator | Monday 02 June 2025 17:15:25 +0000 (0:00:01.165) 0:00:18.576 *********** 2025-06-02 17:15:25.696057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, 
testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:15:25.696519 | orchestrator | 2025-06-02 17:15:25.699603 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-02 17:15:25.699652 | orchestrator | Monday 02 June 2025 17:15:25 +0000 (0:00:00.445) 0:00:19.021 *********** 2025-06-02 17:15:25.792319 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:15:27.021640 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:15:27.024463 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:27.024515 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:27.025129 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:15:27.026408 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:15:27.026983 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:15:27.028070 | orchestrator | 2025-06-02 17:15:27.029269 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-02 17:15:27.030307 | orchestrator | Monday 02 June 2025 17:15:27 +0000 (0:00:01.324) 0:00:20.346 *********** 2025-06-02 17:15:27.100561 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:27.138241 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:27.169955 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:27.193429 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:27.284954 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:27.286328 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:27.290653 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:27.291895 | orchestrator | 2025-06-02 17:15:27.292759 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-02 17:15:27.293776 | orchestrator | Monday 02 June 2025 17:15:27 +0000 (0:00:00.264) 0:00:20.611 *********** 2025-06-02 17:15:27.385872 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:27.410370 | orchestrator | 
ok: [testbed-node-0] 2025-06-02 17:15:27.448201 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:27.519428 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:27.520121 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:27.521164 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:27.522510 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:27.522924 | orchestrator | 2025-06-02 17:15:27.524649 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-02 17:15:27.525630 | orchestrator | Monday 02 June 2025 17:15:27 +0000 (0:00:00.234) 0:00:20.845 *********** 2025-06-02 17:15:27.631405 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:27.658979 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:27.687554 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:27.716919 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:27.797424 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:27.797519 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:27.797621 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:27.799584 | orchestrator | 2025-06-02 17:15:27.799611 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-02 17:15:27.799625 | orchestrator | Monday 02 June 2025 17:15:27 +0000 (0:00:00.277) 0:00:21.123 *********** 2025-06-02 17:15:28.108140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:15:28.111665 | orchestrator | 2025-06-02 17:15:28.112079 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-02 17:15:28.112582 | orchestrator | Monday 02 June 2025 17:15:28 +0000 (0:00:00.308) 0:00:21.432 *********** 2025-06-02 17:15:28.656308 | orchestrator | ok: [testbed-manager] 
2025-06-02 17:15:28.656457 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:28.660088 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:28.660125 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:28.660133 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:28.660641 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:28.661706 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:28.663173 | orchestrator | 2025-06-02 17:15:28.663728 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-02 17:15:28.664553 | orchestrator | Monday 02 June 2025 17:15:28 +0000 (0:00:00.549) 0:00:21.982 *********** 2025-06-02 17:15:28.773229 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:15:28.798675 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:15:28.828316 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:15:28.907617 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:15:28.908996 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:15:28.910330 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:15:28.911182 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:15:28.912439 | orchestrator | 2025-06-02 17:15:28.913115 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-02 17:15:28.913778 | orchestrator | Monday 02 June 2025 17:15:28 +0000 (0:00:00.251) 0:00:22.234 *********** 2025-06-02 17:15:29.984167 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:29.984694 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:29.984725 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:29.985191 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:29.985216 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:15:29.989333 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:29.989727 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:29.989761 | orchestrator | 2025-06-02 
17:15:29.989976 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-02 17:15:29.990322 | orchestrator | Monday 02 June 2025 17:15:29 +0000 (0:00:01.075) 0:00:23.309 *********** 2025-06-02 17:15:30.666775 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:30.667201 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:30.669026 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:30.669997 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:30.670625 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:30.672242 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:30.673125 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:30.674194 | orchestrator | 2025-06-02 17:15:30.676133 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-02 17:15:30.676922 | orchestrator | Monday 02 June 2025 17:15:30 +0000 (0:00:00.681) 0:00:23.991 *********** 2025-06-02 17:15:31.912209 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:31.912967 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:31.914665 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:15:31.917178 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:31.918130 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:31.919318 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:31.920457 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:31.921285 | orchestrator | 2025-06-02 17:15:31.921956 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-02 17:15:31.922533 | orchestrator | Monday 02 June 2025 17:15:31 +0000 (0:00:01.245) 0:00:25.237 *********** 2025-06-02 17:15:45.498731 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:45.498852 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:45.498869 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:45.501288 | orchestrator | changed: [testbed-manager] 
2025-06-02 17:15:45.502644 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:45.502932 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:45.503633 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:15:45.504043 | orchestrator | 2025-06-02 17:15:45.505153 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-02 17:15:45.506316 | orchestrator | Monday 02 June 2025 17:15:45 +0000 (0:00:13.585) 0:00:38.822 *********** 2025-06-02 17:15:45.579847 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:45.605416 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:45.635316 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:45.662308 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:45.736188 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:45.737644 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:45.739841 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:45.740767 | orchestrator | 2025-06-02 17:15:45.742359 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-06-02 17:15:45.742914 | orchestrator | Monday 02 June 2025 17:15:45 +0000 (0:00:00.240) 0:00:39.063 *********** 2025-06-02 17:15:45.816546 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:45.850979 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:45.885894 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:45.920495 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:45.991438 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:45.993158 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:45.994672 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:45.996171 | orchestrator | 2025-06-02 17:15:45.998070 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-06-02 17:15:45.998570 | orchestrator | Monday 02 June 2025 17:15:45 +0000 (0:00:00.254) 0:00:39.317 *********** 2025-06-02 
17:15:46.094691 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:46.139936 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:46.165334 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:46.202399 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:46.278515 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:46.280139 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:46.282396 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:46.282941 | orchestrator | 2025-06-02 17:15:46.283965 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-06-02 17:15:46.285255 | orchestrator | Monday 02 June 2025 17:15:46 +0000 (0:00:00.286) 0:00:39.604 *********** 2025-06-02 17:15:46.616339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:15:46.620101 | orchestrator | 2025-06-02 17:15:46.620332 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-06-02 17:15:46.622494 | orchestrator | Monday 02 June 2025 17:15:46 +0000 (0:00:00.335) 0:00:39.940 *********** 2025-06-02 17:15:48.458468 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:48.459748 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:48.461625 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:48.461881 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:48.462984 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:48.464442 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:48.465793 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:48.466095 | orchestrator | 2025-06-02 17:15:48.467035 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-06-02 17:15:48.467982 | orchestrator | Monday 02 June 2025 17:15:48 
+0000 (0:00:01.842) 0:00:41.782 *********** 2025-06-02 17:15:49.614698 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:49.615663 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:49.615715 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:15:49.617579 | orchestrator | changed: [testbed-manager] 2025-06-02 17:15:49.618472 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:15:49.619468 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:15:49.620298 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:15:49.621083 | orchestrator | 2025-06-02 17:15:49.621770 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-06-02 17:15:49.622811 | orchestrator | Monday 02 June 2025 17:15:49 +0000 (0:00:01.154) 0:00:42.936 *********** 2025-06-02 17:15:50.544462 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:15:50.546847 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:15:50.547701 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:15:50.548680 | orchestrator | ok: [testbed-manager] 2025-06-02 17:15:50.549710 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:15:50.550632 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:15:50.551111 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:15:50.551970 | orchestrator | 2025-06-02 17:15:50.552630 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-06-02 17:15:50.553016 | orchestrator | Monday 02 June 2025 17:15:50 +0000 (0:00:00.933) 0:00:43.870 *********** 2025-06-02 17:15:50.926631 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:15:50.927134 | orchestrator | 2025-06-02 17:15:50.927158 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] 
*** 2025-06-02 17:15:50.928019 | orchestrator | Monday 02 June 2025 17:15:50 +0000 (0:00:00.382) 0:00:44.252 *********** 2025-06-02 17:15:52.011201 | orchestrator | changed: [testbed-manager] 2025-06-02 17:15:52.011905 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:15:52.015556 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:15:52.015609 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:15:52.015621 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:15:52.015632 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:15:52.016034 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:15:52.017458 | orchestrator | 2025-06-02 17:15:52.019332 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-06-02 17:15:52.019369 | orchestrator | Monday 02 June 2025 17:15:51 +0000 (0:00:01.081) 0:00:45.334 *********** 2025-06-02 17:15:52.098554 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:15:52.127188 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:15:52.155522 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:15:52.183858 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:15:52.384911 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:15:52.385668 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:15:52.386851 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:15:52.388031 | orchestrator | 2025-06-02 17:15:52.388918 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-06-02 17:15:52.389652 | orchestrator | Monday 02 June 2025 17:15:52 +0000 (0:00:00.377) 0:00:45.711 *********** 2025-06-02 17:16:06.020723 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:16:06.020886 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:16:06.020914 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:16:06.020934 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:16:06.020953 | 
orchestrator | changed: [testbed-node-0] 2025-06-02 17:16:06.021008 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:16:06.021026 | orchestrator | changed: [testbed-manager] 2025-06-02 17:16:06.021038 | orchestrator | 2025-06-02 17:16:06.021175 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-02 17:16:06.021359 | orchestrator | Monday 02 June 2025 17:16:06 +0000 (0:00:13.629) 0:00:59.340 *********** 2025-06-02 17:16:07.672161 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:16:07.673407 | orchestrator | ok: [testbed-manager] 2025-06-02 17:16:07.675950 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:16:07.676679 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:16:07.678320 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:16:07.679598 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:16:07.680555 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:16:07.681774 | orchestrator | 2025-06-02 17:16:07.682462 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-02 17:16:07.683121 | orchestrator | Monday 02 June 2025 17:16:07 +0000 (0:00:01.656) 0:01:00.997 *********** 2025-06-02 17:16:08.692992 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:16:08.694195 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:16:08.694979 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:16:08.696192 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:16:08.698135 | orchestrator | ok: [testbed-manager] 2025-06-02 17:16:08.698401 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:16:08.699533 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:16:08.700346 | orchestrator | 2025-06-02 17:16:08.701149 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-06-02 17:16:08.701777 | orchestrator | Monday 02 June 2025 17:16:08 +0000 (0:00:01.020) 0:01:02.018 *********** 2025-06-02 17:16:08.783008 | orchestrator | ok: 
[testbed-manager] 2025-06-02 17:16:08.825732 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:16:08.858566 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:16:08.890990 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:16:08.986149 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:16:08.986458 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:16:08.987425 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:16:08.988262 | orchestrator | 2025-06-02 17:16:08.990348 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-06-02 17:16:08.991422 | orchestrator | Monday 02 June 2025 17:16:08 +0000 (0:00:00.293) 0:01:02.311 *********** 2025-06-02 17:16:09.110700 | orchestrator | ok: [testbed-manager] 2025-06-02 17:16:09.137189 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:16:09.176709 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:16:09.206288 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:16:09.274291 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:16:09.274636 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:16:09.276004 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:16:09.276797 | orchestrator | 2025-06-02 17:16:09.277944 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-06-02 17:16:09.278546 | orchestrator | Monday 02 June 2025 17:16:09 +0000 (0:00:00.287) 0:01:02.598 *********** 2025-06-02 17:16:09.603029 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:16:09.604569 | orchestrator | 2025-06-02 17:16:09.605304 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-06-02 17:16:09.605941 | orchestrator | Monday 02 June 2025 17:16:09 +0000 (0:00:00.330) 
0:01:02.928 *********** 2025-06-02 17:16:11.185327 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:16:11.185752 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:16:11.186296 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:16:11.186592 | orchestrator | ok: [testbed-manager] 2025-06-02 17:16:11.186999 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:16:11.187448 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:16:11.188509 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:16:11.189037 | orchestrator | 2025-06-02 17:16:11.189687 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-06-02 17:16:11.190569 | orchestrator | Monday 02 June 2025 17:16:11 +0000 (0:00:01.582) 0:01:04.511 *********** 2025-06-02 17:16:11.766672 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:16:11.766819 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:16:11.766904 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:16:11.767389 | orchestrator | changed: [testbed-manager] 2025-06-02 17:16:11.768381 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:16:11.769154 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:16:11.769641 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:16:11.770402 | orchestrator | 2025-06-02 17:16:11.771152 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-06-02 17:16:11.771565 | orchestrator | Monday 02 June 2025 17:16:11 +0000 (0:00:00.581) 0:01:05.092 *********** 2025-06-02 17:16:11.848696 | orchestrator | ok: [testbed-manager] 2025-06-02 17:16:11.877583 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:16:11.907255 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:16:11.937367 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:16:12.005847 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:16:12.007049 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:16:12.008868 | orchestrator | ok: 
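The "Set needrestart mode" task above switches needrestart out of its interactive default so that the unattended apt runs later in the play do not block on a service-restart prompt. A minimal sketch of the kind of drop-in such a task typically writes — the file path convention and the Perl `$nrconf{restart}` variable are assumptions from needrestart's documented configuration style, not something shown in this log:

```python
# Sketch: render a needrestart conf.d drop-in selecting a restart mode:
# 'a' = automatic (non-interactive), 'l' = list only, 'i' = interactive.
# The syntax is needrestart's Perl-style config, assumed here.
def render_needrestart_conf(mode: str = "a") -> str:
    if mode not in ("a", "l", "i"):
        raise ValueError(f"unknown needrestart mode: {mode}")
    return f"# managed by ansible\n$nrconf{{restart}} = '{mode}';\n"

# Written to e.g. /etc/needrestart/conf.d/, this keeps Debian/Ubuntu
# package upgrades from pausing for a restart prompt.
print(render_needrestart_conf("a"))
```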
[testbed-node-5] 2025-06-02 17:16:12.009610 | orchestrator | 2025-06-02 17:16:12.011078 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-06-02 17:16:12.011944 | orchestrator | Monday 02 June 2025 17:16:11 +0000 (0:00:00.239) 0:01:05.332 *********** 2025-06-02 17:16:13.189738 | orchestrator | ok: [testbed-manager] 2025-06-02 17:16:13.189836 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:16:13.189852 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:16:13.190817 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:16:13.192695 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:16:13.193639 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:16:13.195094 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:16:13.195931 | orchestrator | 2025-06-02 17:16:13.197402 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-06-02 17:16:13.198075 | orchestrator | Monday 02 June 2025 17:16:13 +0000 (0:00:01.178) 0:01:06.511 *********** 2025-06-02 17:16:14.859402 | orchestrator | changed: [testbed-manager] 2025-06-02 17:16:14.859895 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:16:14.860706 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:16:14.861145 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:16:14.861893 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:16:14.863855 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:16:14.863905 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:16:14.864647 | orchestrator | 2025-06-02 17:16:14.865405 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-06-02 17:16:14.865813 | orchestrator | Monday 02 June 2025 17:16:14 +0000 (0:00:01.672) 0:01:08.183 *********** 2025-06-02 17:16:17.798673 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:16:17.798789 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:16:17.799898 | orchestrator | ok: 
[testbed-manager] 2025-06-02 17:16:17.801850 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:16:17.803804 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:16:17.804932 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:16:17.806308 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:16:17.807642 | orchestrator | 2025-06-02 17:16:17.808773 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-06-02 17:16:17.809621 | orchestrator | Monday 02 June 2025 17:16:17 +0000 (0:00:02.936) 0:01:11.120 *********** 2025-06-02 17:16:54.101662 | orchestrator | ok: [testbed-manager] 2025-06-02 17:16:54.102798 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:16:54.102876 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:16:54.104139 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:16:54.105093 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:16:54.106102 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:16:54.107057 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:16:54.108343 | orchestrator | 2025-06-02 17:16:54.109179 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-06-02 17:16:54.109599 | orchestrator | Monday 02 June 2025 17:16:54 +0000 (0:00:36.302) 0:01:47.422 *********** 2025-06-02 17:18:12.195599 | orchestrator | changed: [testbed-manager] 2025-06-02 17:18:12.195737 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:18:12.195843 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:18:12.195864 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:18:12.196381 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:18:12.196905 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:18:12.199471 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:18:12.199960 | orchestrator | 2025-06-02 17:18:12.200692 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-06-02 17:18:12.201527 | 
orchestrator | Monday 02 June 2025 17:18:12 +0000 (0:01:18.093) 0:03:05.516 *********** 2025-06-02 17:18:13.990583 | orchestrator | ok: [testbed-manager] 2025-06-02 17:18:13.992595 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:18:13.993464 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:18:13.994305 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:18:13.994676 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:18:13.997655 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:18:13.998883 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:18:14.000461 | orchestrator | 2025-06-02 17:18:14.001201 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-06-02 17:18:14.002507 | orchestrator | Monday 02 June 2025 17:18:13 +0000 (0:00:01.798) 0:03:07.315 *********** 2025-06-02 17:18:27.058633 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:18:27.061444 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:18:27.061475 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:18:27.061486 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:18:27.063118 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:18:27.064580 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:18:27.065317 | orchestrator | changed: [testbed-manager] 2025-06-02 17:18:27.065924 | orchestrator | 2025-06-02 17:18:27.067337 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-06-02 17:18:27.068140 | orchestrator | Monday 02 June 2025 17:18:27 +0000 (0:00:13.066) 0:03:20.381 *********** 2025-06-02 17:18:27.475704 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-06-02 17:18:27.477198 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-06-02 17:18:27.478452 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-06-02 17:18:27.480539 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-06-02 17:18:27.482509 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-06-02 17:18:27.483930 | orchestrator | 2025-06-02 17:18:27.485492 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] 
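The rabbitmq item included above carries the full TCP-tuning set as a list of name/value pairs. Rendered in the `key = value` form a sysctl.d fragment uses, it looks like this — the values are taken verbatim from the task item in the log; only the rendering helper itself is illustrative:

```python
# The rabbitmq sysctl set exactly as included by the sysctl role above.
RABBITMQ_SYSCTLS = [
    ("net.ipv4.tcp_keepalive_time", 6),
    ("net.ipv4.tcp_keepalive_intvl", 3),
    ("net.ipv4.tcp_keepalive_probes", 3),
    ("net.core.wmem_max", 16777216),
    ("net.core.rmem_max", 16777216),
    ("net.ipv4.tcp_fin_timeout", 20),
    ("net.ipv4.tcp_tw_reuse", 1),
    ("net.core.somaxconn", 4096),
    ("net.ipv4.tcp_syncookies", 0),
    ("net.ipv4.tcp_max_syn_backlog", 8192),
]

def render_sysctl(pairs) -> str:
    # One "key = value" line per parameter, the format sysctl.d expects.
    return "".join(f"{name} = {value}\n" for name, value in pairs)

print(render_sysctl(RABBITMQ_SYSCTLS))
```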
*********** 2025-06-02 17:18:27.489507 | orchestrator | Monday 02 June 2025 17:18:27 +0000 (0:00:00.419) 0:03:20.801 *********** 2025-06-02 17:18:27.532135 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 17:18:27.558980 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:18:27.645433 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 17:18:29.089003 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 17:18:29.089313 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:18:29.091345 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:18:29.094616 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-02 17:18:29.095524 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:18:29.096073 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 17:18:29.097346 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 17:18:29.097492 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 17:18:29.098664 | orchestrator | 2025-06-02 17:18:29.098926 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-06-02 17:18:29.099669 | orchestrator | Monday 02 June 2025 17:18:29 +0000 (0:00:01.612) 0:03:22.414 *********** 2025-06-02 17:18:29.149483 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 17:18:29.149739 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 17:18:29.150430 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  
2025-06-02 17:18:29.151361 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 17:18:29.185458 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 17:18:29.187266 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 17:18:29.187732 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 17:18:29.188530 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 17:18:29.189082 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 17:18:29.189559 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 17:18:29.212817 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:18:29.273634 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 17:18:29.275617 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 17:18:29.276263 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 17:18:34.833713 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 17:18:34.834596 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 17:18:34.835328 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 17:18:34.835890 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 17:18:34.836565 | orchestrator | skipping: 
[testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 17:18:34.837618 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 17:18:34.839074 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 17:18:34.839742 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 17:18:34.840566 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 17:18:34.843093 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 17:18:34.845043 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 17:18:34.845510 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 17:18:34.846833 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 17:18:34.848126 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 17:18:34.849560 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 17:18:34.851000 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 17:18:34.851906 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:18:34.852865 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 17:18:34.853718 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:18:34.854410 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-02 17:18:34.855215 | orchestrator | skipping: 
[testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-02 17:18:34.855928 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-02 17:18:34.856657 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-02 17:18:34.857336 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-02 17:18:34.857762 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-02 17:18:34.859045 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-02 17:18:34.859068 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-02 17:18:34.859353 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-02 17:18:34.859796 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-02 17:18:34.860146 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:18:34.860912 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-02 17:18:34.864187 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-02 17:18:34.864255 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-02 17:18:34.864267 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-02 17:18:34.864279 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-02 17:18:34.864290 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 
3}) 2025-06-02 17:18:34.864301 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-02 17:18:34.864312 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-02 17:18:34.864337 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-02 17:18:34.864350 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-02 17:18:34.864362 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-02 17:18:34.864441 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-02 17:18:34.864688 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-02 17:18:34.864943 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-02 17:18:34.865263 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-02 17:18:34.865505 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-02 17:18:34.866087 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-02 17:18:34.866181 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-02 17:18:34.866569 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-02 17:18:34.866854 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-02 17:18:34.867105 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-02 17:18:34.867483 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-02 17:18:34.867640 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-02 17:18:34.868038 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-02 17:18:34.868251 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-02 17:18:34.868555 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-02 17:18:34.868846 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-02 17:18:34.869120 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-02 17:18:34.869425 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-02 17:18:34.869716 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-02 17:18:34.870109 | orchestrator | 2025-06-02 17:18:34.870364 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-06-02 17:18:34.870532 | orchestrator | Monday 02 June 2025 17:18:34 +0000 (0:00:05.743) 0:03:28.157 *********** 2025-06-02 17:18:35.481079 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 17:18:35.481381 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 17:18:35.482477 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 17:18:35.484934 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 17:18:35.486173 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 
2025-06-02 17:18:35.486281 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 17:18:35.486976 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-02 17:18:35.487591 | orchestrator | 2025-06-02 17:18:35.488367 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-06-02 17:18:35.489004 | orchestrator | Monday 02 June 2025 17:18:35 +0000 (0:00:00.649) 0:03:28.807 *********** 2025-06-02 17:18:35.548741 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-02 17:18:35.600570 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:18:35.600935 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-02 17:18:35.601859 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-02 17:18:35.629206 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:18:35.662445 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:18:35.662543 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-02 17:18:35.687619 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:18:36.103589 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-02 17:18:36.103691 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-02 17:18:36.107003 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-02 17:18:36.107033 | orchestrator | 2025-06-02 17:18:36.107408 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-06-02 17:18:36.108050 
| orchestrator | Monday 02 June 2025 17:18:36 +0000 (0:00:00.620) 0:03:29.428 *********** 2025-06-02 17:18:36.169668 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-02 17:18:36.206678 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:18:36.207387 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-02 17:18:36.208335 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-02 17:18:36.236993 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:18:36.270639 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:18:36.270830 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-02 17:18:36.294680 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:18:36.792485 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-02 17:18:36.792584 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-02 17:18:36.792599 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-02 17:18:36.792673 | orchestrator | 2025-06-02 17:18:36.793118 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-06-02 17:18:36.793667 | orchestrator | Monday 02 June 2025 17:18:36 +0000 (0:00:00.690) 0:03:30.118 *********** 2025-06-02 17:18:36.887175 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:18:36.919439 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:18:36.959414 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:18:36.988572 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:18:37.118954 | orchestrator | skipping: [testbed-node-3] 2025-06-02 
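The skipped/changed pattern across the sysctl tasks above reveals how each parameter set is gated on group membership: elasticsearch and rabbitmq change only on testbed-node-0 through -2, compute and k3s_node change only on testbed-node-3 through -5, and generic changes everywhere including the manager. A sketch of that selection logic — the group names and membership are inferred from the skip pattern, not read from the actual inventory:

```python
# Which sysctl sets apply to which host, inferred from the
# skipping/changed lines in the log (an assumption, not inventory data):
# control-plane nodes 0-2 get elasticsearch + rabbitmq, compute
# nodes 3-5 get compute + k3s_node, and "generic" applies to all hosts.
GROUPS = {
    "elasticsearch": {"testbed-node-0", "testbed-node-1", "testbed-node-2"},
    "rabbitmq": {"testbed-node-0", "testbed-node-1", "testbed-node-2"},
    "compute": {"testbed-node-3", "testbed-node-4", "testbed-node-5"},
    "k3s_node": {"testbed-node-3", "testbed-node-4", "testbed-node-5"},
}

def applicable_sets(host: str) -> list[str]:
    sets = [key for key, hosts in GROUPS.items() if host in hosts]
    sets.append("generic")  # applied unconditionally on every host
    return sets
```

Under this reading, testbed-manager only ever receives the generic `vm.swappiness` tuning, which matches it being skipped in every other sysctl task above.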
17:18:37.119861 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:18:37.120616 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:18:37.122523 | orchestrator | 2025-06-02 17:18:37.123396 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-06-02 17:18:37.124031 | orchestrator | Monday 02 June 2025 17:18:37 +0000 (0:00:00.326) 0:03:30.444 *********** 2025-06-02 17:18:42.896711 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:18:42.898213 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:18:42.900618 | orchestrator | ok: [testbed-manager] 2025-06-02 17:18:42.901708 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:18:42.902533 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:18:42.903710 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:18:42.904369 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:18:42.904792 | orchestrator | 2025-06-02 17:18:42.905579 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-06-02 17:18:42.906200 | orchestrator | Monday 02 June 2025 17:18:42 +0000 (0:00:05.777) 0:03:36.222 *********** 2025-06-02 17:18:42.987444 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-06-02 17:18:42.987523 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-06-02 17:18:43.038745 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:18:43.039669 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-06-02 17:18:43.080393 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:18:43.125797 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:18:43.126593 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2025-06-02 17:18:43.126624 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-06-02 17:18:43.168045 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:18:43.169322 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-06-02 17:18:43.237129 | 
orchestrator | skipping: [testbed-node-3]
2025-06-02 17:18:43.238161 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:18:43.239019 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-06-02 17:18:43.240093 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:18:43.241029 | orchestrator |
2025-06-02 17:18:43.241730 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-06-02 17:18:43.241998 | orchestrator | Monday 02 June 2025 17:18:43 +0000 (0:00:00.341) 0:03:36.564 ***********
2025-06-02 17:18:44.381588 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-06-02 17:18:44.381920 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-06-02 17:18:44.383735 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-06-02 17:18:44.387021 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-06-02 17:18:44.387069 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-06-02 17:18:44.387083 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-06-02 17:18:44.387095 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-06-02 17:18:44.387138 | orchestrator |
2025-06-02 17:18:44.387305 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-06-02 17:18:44.388318 | orchestrator | Monday 02 June 2025 17:18:44 +0000 (0:00:01.143) 0:03:37.707 ***********
2025-06-02 17:18:44.980091 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:18:44.980676 | orchestrator |
2025-06-02 17:18:44.981583 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-06-02 17:18:44.982327 | orchestrator | Monday 02 June 2025 17:18:44 +0000 (0:00:00.593) 0:03:38.301 ***********
2025-06-02 17:18:46.354825 | orchestrator | ok: [testbed-manager]
2025-06-02 17:18:46.355627 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:18:46.356413 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:18:46.359529 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:18:46.361214 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:18:46.362507 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:18:46.363438 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:18:46.364178 | orchestrator |
2025-06-02 17:18:46.364974 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-06-02 17:18:46.366137 | orchestrator | Monday 02 June 2025 17:18:46 +0000 (0:00:01.376) 0:03:39.678 ***********
2025-06-02 17:18:47.050298 | orchestrator | ok: [testbed-manager]
2025-06-02 17:18:47.050474 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:18:47.051942 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:18:47.053154 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:18:47.054774 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:18:47.055352 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:18:47.056551 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:18:47.057180 | orchestrator |
2025-06-02 17:18:47.058173 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-06-02 17:18:47.058551 | orchestrator | Monday 02 June 2025 17:18:47 +0000 (0:00:00.695) 0:03:40.374 ***********
2025-06-02 17:18:47.693321 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:18:47.693454 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:18:47.694356 | orchestrator | changed: [testbed-manager]
2025-06-02 17:18:47.697053 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:18:47.697090 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:18:47.697236 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:18:47.697839 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:18:47.699994 | orchestrator |
2025-06-02 17:18:47.701402 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-06-02 17:18:47.702183 | orchestrator | Monday 02 June 2025 17:18:47 +0000 (0:00:00.643) 0:03:41.018 ***********
2025-06-02 17:18:48.440490 | orchestrator | ok: [testbed-manager]
2025-06-02 17:18:48.441754 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:18:48.442558 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:18:48.443583 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:18:48.444327 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:18:48.445403 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:18:48.445908 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:18:48.446471 | orchestrator |
2025-06-02 17:18:48.446800 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-06-02 17:18:48.447342 | orchestrator | Monday 02 June 2025 17:18:48 +0000 (0:00:00.748) 0:03:41.766 ***********
2025-06-02 17:18:49.505659 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883385.024534, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.505795 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883392.8368747, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.508742 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883395.459359, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.510104 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883402.3735578, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.511097 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883332.8717935, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.512166 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883477.3151028, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.512999 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748883394.7932196, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.513738 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883282.6498914, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.514768 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883288.4585805, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.514981 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883292.2638373, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.516445 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883294.8484282, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.516841 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883360.3113363, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.517586 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883371.5834846, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.517776 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748883287.6188235, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-02 17:18:49.518677 | orchestrator |
2025-06-02 17:18:49.518900 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-06-02 17:18:49.519366 | orchestrator | Monday 02 June 2025 17:18:49 +0000 (0:00:01.063) 0:03:42.829 ***********
2025-06-02 17:18:50.693982 | orchestrator | changed: [testbed-manager]
2025-06-02 17:18:50.696193 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:18:50.698882 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:18:50.700247 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:18:50.700873 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:18:50.702116 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:18:50.703013 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:18:50.703768 | orchestrator |
2025-06-02 17:18:50.704619 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-06-02 17:18:50.705449 | orchestrator | Monday 02 June 2025 17:18:50 +0000 (0:00:01.186) 0:03:44.016 ***********
2025-06-02 17:18:51.894737 | orchestrator | changed: [testbed-manager]
2025-06-02 17:18:51.894973 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:18:51.896034 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:18:51.896367 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:18:51.897046 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:18:51.898183 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:18:51.898900 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:18:51.899491 | orchestrator |
2025-06-02 17:18:51.899835 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-06-02 17:18:51.900944 | orchestrator | Monday 02 June 2025 17:18:51 +0000 (0:00:01.202) 0:03:45.219 ***********
2025-06-02 17:18:53.037365 | orchestrator | changed: [testbed-manager]
2025-06-02 17:18:53.038668 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:18:53.039403 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:18:53.041059 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:18:53.041809 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:18:53.042903 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:18:53.043603 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:18:53.044447 | orchestrator |
2025-06-02 17:18:53.044870 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-06-02 17:18:53.045523 | orchestrator | Monday 02 June 2025 17:18:53 +0000 (0:00:01.143) 0:03:46.363 ***********
2025-06-02 17:18:53.152418 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:18:53.207909 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:18:53.246891 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:18:53.283814 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:18:53.363671 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:18:53.363828 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:18:53.363940 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:18:53.364505 | orchestrator |
2025-06-02 17:18:53.365328 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-06-02 17:18:53.366579 | orchestrator | Monday 02 June 2025 17:18:53 +0000 (0:00:00.326) 0:03:46.690 ***********
2025-06-02 17:18:54.125449 | orchestrator | ok: [testbed-manager]
2025-06-02 17:18:54.125680 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:18:54.127439 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:18:54.127909 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:18:54.129495 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:18:54.131119 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:18:54.132012 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:18:54.132847 | orchestrator |
2025-06-02 17:18:54.133267 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-06-02 17:18:54.134143 | orchestrator | Monday 02 June 2025 17:18:54 +0000 (0:00:00.759) 0:03:47.449 ***********
2025-06-02 17:18:54.568674 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:18:54.569677 | orchestrator |
2025-06-02 17:18:54.573388 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-06-02 17:18:54.573450 | orchestrator | Monday 02 June 2025 17:18:54 +0000 (0:00:00.444) 0:03:47.894 ***********
2025-06-02 17:19:03.050606 | orchestrator | ok: [testbed-manager]
2025-06-02 17:19:03.051274 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:19:03.052818 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:19:03.055138 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:19:03.056649 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:19:03.057401 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:19:03.057911 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:19:03.058692 | orchestrator |
2025-06-02 17:19:03.060297 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-06-02 17:19:03.060821 | orchestrator | Monday 02 June 2025 17:19:03 +0000 (0:00:08.480) 0:03:56.374 ***********
2025-06-02 17:19:04.365531 | orchestrator | ok: [testbed-manager]
2025-06-02 17:19:04.365792 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:19:04.367692 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:19:04.367727 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:19:04.368024 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:19:04.369172 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:19:04.370385 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:19:04.372684 | orchestrator |
2025-06-02 17:19:04.373823 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-06-02 17:19:04.375496 | orchestrator | Monday 02 June 2025 17:19:04 +0000 (0:00:01.316) 0:03:57.691 ***********
2025-06-02 17:19:06.211444 | orchestrator | ok: [testbed-manager]
2025-06-02 17:19:06.212500 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:19:06.214376 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:19:06.216594 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:19:06.217378 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:19:06.217986 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:19:06.218917 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:19:06.219860 | orchestrator |
2025-06-02 17:19:06.220310 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-06-02 17:19:06.221029 | orchestrator | Monday 02 June 2025 17:19:06 +0000 (0:00:01.844) 0:03:59.535 ***********
2025-06-02 17:19:06.749098 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:19:06.749454 | orchestrator |
2025-06-02 17:19:06.750627 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-06-02 17:19:06.752073 | orchestrator | Monday 02 June 2025 17:19:06 +0000 (0:00:00.539) 0:04:00.075 ***********
2025-06-02 17:19:15.582288 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:19:15.582411 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:19:15.583313 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:19:15.585878 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:19:15.587415 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:19:15.588991 | orchestrator | changed: [testbed-manager]
2025-06-02 17:19:15.590987 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:19:15.591098 | orchestrator |
2025-06-02 17:19:15.592010 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-06-02 17:19:15.592681 | orchestrator | Monday 02 June 2025 17:19:15 +0000 (0:00:08.831) 0:04:08.906 ***********
2025-06-02 17:19:16.210790 | orchestrator | changed: [testbed-manager]
2025-06-02 17:19:16.211726 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:19:16.213369 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:19:16.214465 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:19:16.215761 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:19:16.215854 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:19:16.217129 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:19:16.218191 | orchestrator |
2025-06-02 17:19:16.219023 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-06-02 17:19:16.220336 | orchestrator | Monday 02 June 2025 17:19:16 +0000 (0:00:00.630) 0:04:09.536 ***********
2025-06-02 17:19:17.367301 | orchestrator | changed: [testbed-manager]
2025-06-02 17:19:17.371601 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:19:17.372480 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:19:17.373246 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:19:17.373825 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:19:17.374284 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:19:17.375029 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:19:17.377327 | orchestrator |
2025-06-02 17:19:17.377417 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-06-02 17:19:17.377434 | orchestrator | Monday 02 June 2025 17:19:17 +0000 (0:00:01.154) 0:04:10.691 ***********
2025-06-02 17:19:18.466383 | orchestrator | changed: [testbed-manager]
2025-06-02 17:19:18.469483 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:19:18.471672 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:19:18.471718 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:19:18.472849 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:19:18.476050 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:19:18.478580 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:19:18.481371 | orchestrator |
2025-06-02 17:19:18.482259 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-06-02 17:19:18.484016 | orchestrator | Monday 02 June 2025 17:19:18 +0000 (0:00:01.099) 0:04:11.791 ***********
2025-06-02 17:19:18.579700 | orchestrator | ok: [testbed-manager]
2025-06-02 17:19:18.618704 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:19:18.672812 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:19:18.712541 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:19:18.794831 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:19:18.795028 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:19:18.795262 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:19:18.795772 | orchestrator |
2025-06-02 17:19:18.796502 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-06-02 17:19:18.797377 | orchestrator | Monday 02 June 2025 17:19:18 +0000 (0:00:00.329) 0:04:12.120 ***********
2025-06-02 17:19:18.893971 | orchestrator | ok: [testbed-manager]
2025-06-02 17:19:18.932993 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:19:19.007344 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:19:19.050346 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:19:19.135867 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:19:19.137100 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:19:19.143882 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:19:19.143920 | orchestrator |
2025-06-02 17:19:19.143929 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-06-02 17:19:19.143938 | orchestrator | Monday 02 June 2025 17:19:19 +0000 (0:00:00.342) 0:04:12.462 ***********
2025-06-02 17:19:19.270929 | orchestrator | ok: [testbed-manager]
2025-06-02 17:19:19.307499 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:19:19.345810 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:19:19.379832 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:19:19.487980 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:19:19.488518 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:19:19.488815 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:19:19.490128 | orchestrator |
2025-06-02 17:19:19.490461 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-06-02 17:19:19.491835 | orchestrator | Monday 02 June 2025 17:19:19 +0000 (0:00:00.352) 0:04:12.814 ***********
2025-06-02 17:19:25.338556 | orchestrator | ok: [testbed-manager]
2025-06-02 17:19:25.338643 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:19:25.339186 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:19:25.339379 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:19:25.339860 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:19:25.340253 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:19:25.340558 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:19:25.345182 | orchestrator |
2025-06-02 17:19:25.345325 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-06-02 17:19:25.345853 | orchestrator | Monday 02 June 2025 17:19:25 +0000 (0:00:05.848) 0:04:18.663 ***********
2025-06-02 17:19:25.760244 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:19:25.760352 | orchestrator |
2025-06-02 17:19:25.761975 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-06-02 17:19:25.763176 | orchestrator | Monday 02 June 2025 17:19:25 +0000 (0:00:00.423) 0:04:19.087 ***********
2025-06-02 17:19:25.843019 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-06-02 17:19:25.843117 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-06-02 17:19:25.901895 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:19:25.902915 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-06-02 17:19:25.904524 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-06-02 17:19:25.905996 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-06-02 17:19:25.906601 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-06-02 17:19:25.958414 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:19:25.959117 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-06-02 17:19:26.004862 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:19:26.004948 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-06-02 17:19:26.056043 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-06-02 17:19:26.056133 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:19:26.056287 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-06-02 17:19:26.056541 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-06-02 17:19:26.146289 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:19:26.147685 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-06-02 17:19:26.148463 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:19:26.152037 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-06-02 17:19:26.152063 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-06-02 17:19:26.152075 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:19:26.152692 | orchestrator |
2025-06-02 17:19:26.153553 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-06-02 17:19:26.154569 | orchestrator | Monday 02 June 2025 17:19:26 +0000 (0:00:00.385) 0:04:19.472 ***********
2025-06-02 17:19:26.607897 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:19:26.608067 | orchestrator |
2025-06-02 17:19:26.609567 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-06-02 17:19:26.609608 | orchestrator | Monday 02 June 2025 17:19:26 +0000 (0:00:00.460) 0:04:19.933 ***********
2025-06-02 17:19:26.689819 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-06-02 17:19:26.689917 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-06-02 17:19:26.733834 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:19:26.734004 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-06-02 17:19:26.767348 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:19:26.814966 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-06-02 17:19:26.818152 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:19:26.818203 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-06-02 17:19:26.872289 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:19:26.872388 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-06-02 17:19:26.966179 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:19:26.967091 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:19:26.967365 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-06-02 17:19:26.968019 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:19:26.968749 | orchestrator |
2025-06-02 17:19:26.969178 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-06-02 17:19:26.969616 | orchestrator | Monday 02 June 2025 17:19:26 +0000 (0:00:00.360) 0:04:20.293 ***********
2025-06-02 17:19:27.524495 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:19:27.527052 | orchestrator |
2025-06-02 17:19:27.527135 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-06-02 17:19:27.527254 | orchestrator | Monday 02 June 2025 17:19:27 +0000 (0:00:00.556) 0:04:20.849 ***********
2025-06-02 17:20:02.203322 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:20:02.207382 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:20:02.207423 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:20:02.207436 | orchestrator | changed: [testbed-manager]
2025-06-02 17:20:02.209569 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:20:02.209896 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:20:02.210477 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:20:02.210927 | orchestrator |
2025-06-02 17:20:02.211261 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-06-02 17:20:02.211664 | orchestrator | Monday 02 June 2025 17:20:02 +0000 (0:00:34.677) 0:04:55.527 ***********
2025-06-02 17:20:10.304179 | orchestrator | changed: [testbed-manager]
2025-06-02 17:20:10.304793 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:20:10.305684 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:20:10.306415 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:20:10.307765 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:20:10.308020 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:20:10.308700 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:20:10.309322 | orchestrator |
2025-06-02 17:20:10.310421 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-06-02 17:20:10.311565 | orchestrator | Monday 02 June 2025 17:20:10 +0000 (0:00:08.101) 0:05:03.628 ***********
2025-06-02 17:20:18.178262 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:20:18.178475 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:20:18.180044 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:20:18.180820 | orchestrator | changed: [testbed-manager]
2025-06-02 17:20:18.182135 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:20:18.182845 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:20:18.183901 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:20:18.186911 | orchestrator |
2025-06-02 17:20:18.188047 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-06-02 17:20:18.189545 | orchestrator | Monday 02 June 2025 17:20:18 +0000 (0:00:07.873) 0:05:11.502 ***********
2025-06-02 17:20:19.902612 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:20:19.903743 | orchestrator | ok: [testbed-manager]
2025-06-02 17:20:19.904591 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:20:19.907322 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:20:19.908133 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:20:19.909087 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:20:19.909447 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:20:19.910389 | orchestrator |
2025-06-02 17:20:19.911156 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-06-02 17:20:19.912307 | orchestrator | Monday 02 June 2025 17:20:19 +0000 (0:00:01.726) 0:05:13.228 ***********
2025-06-02 17:20:25.829597 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:20:25.830984 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:20:25.832239 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:20:25.835162 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:20:25.835322 | orchestrator | changed: [testbed-manager]
2025-06-02 17:20:25.836432 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:20:25.838104 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:20:25.838141 | orchestrator |
2025-06-02 17:20:25.838619 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-06-02 17:20:25.839139 | orchestrator | Monday 02 June 2025 17:20:25 +0000 (0:00:05.925) 0:05:19.154 ***********
2025-06-02 17:20:26.253740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:20:26.253916 | orchestrator |
2025-06-02 17:20:26.255797 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-06-02 17:20:26.258336 | orchestrator | Monday 02 June 2025 17:20:26 +0000 (0:00:00.425) 0:05:19.579 ***********
2025-06-02 17:20:27.027086 | orchestrator | changed: [testbed-manager]
2025-06-02 17:20:27.027317 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:20:27.027895 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:20:27.031595 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:20:27.032914 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:20:27.033919 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:20:27.035031 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:20:27.036213 | orchestrator |
2025-06-02 17:20:27.037219 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-06-02 17:20:27.038204 | orchestrator | Monday 02 June 2025 17:20:27 +0000 (0:00:00.772) 0:05:20.351 ***********
2025-06-02 17:20:28.950345 | orchestrator | ok: [testbed-manager]
2025-06-02 17:20:28.950562 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:20:28.952699 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:20:28.953933 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:20:28.955719 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:20:28.956389 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:20:28.957482 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:20:28.957818 | orchestrator |
2025-06-02 17:20:28.958631 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-06-02 17:20:28.959117 | orchestrator | Monday 02 June 2025 17:20:28 +0000 (0:00:01.921) 0:05:22.273 ***********
2025-06-02 17:20:29.793125 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:20:29.795612 | orchestrator | changed: [testbed-manager]
2025-06-02 17:20:29.795921 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:20:29.797171 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:20:29.798338 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:20:29.798562 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:20:29.799718 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:20:29.800144 | orchestrator |
2025-06-02 17:20:29.800598 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-06-02 17:20:29.801844 | orchestrator | Monday 02 June 2025 17:20:29 +0000 (0:00:00.846) 0:05:23.120 ***********
2025-06-02 17:20:29.927406 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:20:29.968455 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:20:30.005933 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:20:30.042307 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:20:30.098364 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:20:30.100150 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:20:30.101278 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:20:30.102148 | orchestrator |
2025-06-02 17:20:30.102890 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-06-02 17:20:30.103737 | orchestrator | Monday 02 June 2025 17:20:30 +0000 (0:00:00.304) 0:05:23.424 ***********
2025-06-02 17:20:30.164740 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:20:30.197592 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:20:30.231082 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:20:30.302423 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:20:30.525623 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:20:30.526835 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:20:30.527723 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:20:30.528968 | orchestrator |
2025-06-02 17:20:30.530108 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-06-02 17:20:30.531590 | orchestrator | Monday 02 June 2025 17:20:30 +0000 (0:00:00.427) 0:05:23.852 ***********
2025-06-02 17:20:30.644436 | orchestrator | ok: [testbed-manager]
2025-06-02 17:20:30.682327 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:20:30.715890 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:20:30.776604 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:20:30.857271 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:20:30.857988 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:20:30.858818 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:20:30.859982 | orchestrator |
2025-06-02 17:20:30.860736 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-06-02 17:20:30.861332 | orchestrator | Monday 02 June 2025 17:20:30 +0000 (0:00:00.330) 0:05:24.182 ***********
2025-06-02 17:20:30.927132 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:20:30.965487 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:20:31.000417 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:20:31.088660 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:20:31.166294 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:20:31.166852 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:20:31.167912 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:20:31.169828 | orchestrator |
2025-06-02 17:20:31.170088 | orchestrator | TASK [osism.services.docker : Set
docker_cli_version variable to default value] *** 2025-06-02 17:20:31.171486 | orchestrator | Monday 02 June 2025 17:20:31 +0000 (0:00:00.310) 0:05:24.493 *********** 2025-06-02 17:20:31.327563 | orchestrator | ok: [testbed-manager] 2025-06-02 17:20:31.383565 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:20:31.423133 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:20:31.464924 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:20:31.560674 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:20:31.560766 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:20:31.561024 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:20:31.561628 | orchestrator | 2025-06-02 17:20:31.562327 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-06-02 17:20:31.562558 | orchestrator | Monday 02 June 2025 17:20:31 +0000 (0:00:00.392) 0:05:24.886 *********** 2025-06-02 17:20:31.690424 | orchestrator | ok: [testbed-manager] =>  2025-06-02 17:20:31.691110 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 17:20:31.726306 | orchestrator | ok: [testbed-node-0] =>  2025-06-02 17:20:31.727125 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 17:20:31.765978 | orchestrator | ok: [testbed-node-1] =>  2025-06-02 17:20:31.766120 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 17:20:31.815153 | orchestrator | ok: [testbed-node-2] =>  2025-06-02 17:20:31.815361 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 17:20:31.906086 | orchestrator | ok: [testbed-node-3] =>  2025-06-02 17:20:31.907973 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 17:20:31.908073 | orchestrator | ok: [testbed-node-4] =>  2025-06-02 17:20:31.908319 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 17:20:31.910950 | orchestrator | ok: [testbed-node-5] =>  2025-06-02 17:20:31.913316 | orchestrator |  docker_version: 5:27.5.1 2025-06-02 17:20:31.913577 | orchestrator | 2025-06-02 17:20:31.914297 | orchestrator | TASK [osism.services.docker 
: Print used docker cli version] ******************* 2025-06-02 17:20:31.914778 | orchestrator | Monday 02 June 2025 17:20:31 +0000 (0:00:00.345) 0:05:25.232 *********** 2025-06-02 17:20:32.176028 | orchestrator | ok: [testbed-manager] =>  2025-06-02 17:20:32.177080 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 17:20:32.212092 | orchestrator | ok: [testbed-node-0] =>  2025-06-02 17:20:32.212167 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 17:20:32.251150 | orchestrator | ok: [testbed-node-1] =>  2025-06-02 17:20:32.251315 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 17:20:32.289602 | orchestrator | ok: [testbed-node-2] =>  2025-06-02 17:20:32.291745 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 17:20:32.375914 | orchestrator | ok: [testbed-node-3] =>  2025-06-02 17:20:32.376529 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 17:20:32.377303 | orchestrator | ok: [testbed-node-4] =>  2025-06-02 17:20:32.378876 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 17:20:32.379520 | orchestrator | ok: [testbed-node-5] =>  2025-06-02 17:20:32.380144 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-02 17:20:32.381434 | orchestrator | 2025-06-02 17:20:32.382202 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-06-02 17:20:32.382995 | orchestrator | Monday 02 June 2025 17:20:32 +0000 (0:00:00.470) 0:05:25.703 *********** 2025-06-02 17:20:32.475444 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:20:32.518519 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:20:32.566479 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:20:32.598388 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:20:32.631905 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:20:32.699982 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:20:32.700306 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:20:32.701551 | orchestrator | 
2025-06-02 17:20:32.701908 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-06-02 17:20:32.702667 | orchestrator | Monday 02 June 2025 17:20:32 +0000 (0:00:00.324) 0:05:26.027 *********** 2025-06-02 17:20:32.792955 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:20:32.828731 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:20:32.864687 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:20:32.898629 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:20:32.937517 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:20:32.996241 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:20:32.998002 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:20:33.001581 | orchestrator | 2025-06-02 17:20:33.001606 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-06-02 17:20:33.003306 | orchestrator | Monday 02 June 2025 17:20:32 +0000 (0:00:00.295) 0:05:26.323 *********** 2025-06-02 17:20:33.447019 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:20:33.449097 | orchestrator | 2025-06-02 17:20:33.450900 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-06-02 17:20:33.452033 | orchestrator | Monday 02 June 2025 17:20:33 +0000 (0:00:00.449) 0:05:26.773 *********** 2025-06-02 17:20:34.297594 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:20:34.298242 | orchestrator | ok: [testbed-manager] 2025-06-02 17:20:34.299751 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:20:34.300683 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:20:34.301658 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:20:34.302465 | orchestrator | ok: [testbed-node-5] 
2025-06-02 17:20:34.303109 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:20:34.303962 | orchestrator | 2025-06-02 17:20:34.304736 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-06-02 17:20:34.305144 | orchestrator | Monday 02 June 2025 17:20:34 +0000 (0:00:00.848) 0:05:27.622 *********** 2025-06-02 17:20:37.079763 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:20:37.079945 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:20:37.080985 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:20:37.081794 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:20:37.086122 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:20:37.086639 | orchestrator | ok: [testbed-manager] 2025-06-02 17:20:37.087260 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:20:37.087990 | orchestrator | 2025-06-02 17:20:37.088841 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-06-02 17:20:37.089403 | orchestrator | Monday 02 June 2025 17:20:37 +0000 (0:00:02.783) 0:05:30.405 *********** 2025-06-02 17:20:37.158733 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-06-02 17:20:37.250929 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-06-02 17:20:37.251054 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-06-02 17:20:37.251892 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-06-02 17:20:37.252790 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-06-02 17:20:37.256410 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-06-02 17:20:37.329589 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:20:37.329676 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-06-02 17:20:37.329686 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-06-02 17:20:37.332184 | orchestrator | skipping: 
[testbed-node-1] => (item=docker-engine)  2025-06-02 17:20:37.571988 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:20:37.573140 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-06-02 17:20:37.573543 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-06-02 17:20:37.574627 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-06-02 17:20:37.647282 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:20:37.647893 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-06-02 17:20:37.649022 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-06-02 17:20:37.649391 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-06-02 17:20:37.744906 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:20:37.745417 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-06-02 17:20:37.745486 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-06-02 17:20:37.745903 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-06-02 17:20:37.915614 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:20:37.916357 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:20:37.917906 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-06-02 17:20:37.918591 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-06-02 17:20:37.919859 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-06-02 17:20:37.922969 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:20:37.923020 | orchestrator | 2025-06-02 17:20:37.923036 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-06-02 17:20:37.926106 | orchestrator | Monday 02 June 2025 17:20:37 +0000 (0:00:00.834) 0:05:31.240 *********** 2025-06-02 17:20:44.403596 | orchestrator | ok: [testbed-manager] 2025-06-02 17:20:44.403712 | orchestrator | changed: 
[testbed-node-2] 2025-06-02 17:20:44.405927 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:20:44.407426 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:20:44.408779 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:20:44.409602 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:20:44.409998 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:20:44.410775 | orchestrator | 2025-06-02 17:20:44.411120 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-06-02 17:20:44.411955 | orchestrator | Monday 02 June 2025 17:20:44 +0000 (0:00:06.486) 0:05:37.726 *********** 2025-06-02 17:20:45.478541 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:20:45.480889 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:20:45.480963 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:20:45.482279 | orchestrator | ok: [testbed-manager] 2025-06-02 17:20:45.483709 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:20:45.485259 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:20:45.486885 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:20:45.487214 | orchestrator | 2025-06-02 17:20:45.488483 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-06-02 17:20:45.491534 | orchestrator | Monday 02 June 2025 17:20:45 +0000 (0:00:01.075) 0:05:38.802 *********** 2025-06-02 17:20:53.471605 | orchestrator | ok: [testbed-manager] 2025-06-02 17:20:53.474227 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:20:53.477233 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:20:53.477276 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:20:53.477757 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:20:53.478623 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:20:53.479415 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:20:53.480455 | orchestrator | 2025-06-02 17:20:53.480847 | orchestrator | 
TASK [osism.services.docker : Update package cache] **************************** 2025-06-02 17:20:53.481364 | orchestrator | Monday 02 June 2025 17:20:53 +0000 (0:00:07.992) 0:05:46.795 *********** 2025-06-02 17:20:57.022908 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:20:57.025142 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:20:57.026133 | orchestrator | changed: [testbed-manager] 2025-06-02 17:20:57.028452 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:20:57.028483 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:20:57.029263 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:20:57.030415 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:20:57.030909 | orchestrator | 2025-06-02 17:20:57.031893 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-06-02 17:20:57.033191 | orchestrator | Monday 02 June 2025 17:20:57 +0000 (0:00:03.550) 0:05:50.345 *********** 2025-06-02 17:20:58.575295 | orchestrator | ok: [testbed-manager] 2025-06-02 17:20:58.576940 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:20:58.577916 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:20:58.578768 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:20:58.580194 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:20:58.581444 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:20:58.582827 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:20:58.584047 | orchestrator | 2025-06-02 17:20:58.585482 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-06-02 17:20:58.586413 | orchestrator | Monday 02 June 2025 17:20:58 +0000 (0:00:01.554) 0:05:51.900 *********** 2025-06-02 17:20:59.893746 | orchestrator | ok: [testbed-manager] 2025-06-02 17:20:59.893852 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:20:59.894400 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:20:59.895376 | orchestrator | 
changed: [testbed-node-2] 2025-06-02 17:20:59.896422 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:20:59.896938 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:20:59.897632 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:20:59.897965 | orchestrator | 2025-06-02 17:20:59.900993 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-06-02 17:20:59.901023 | orchestrator | Monday 02 June 2025 17:20:59 +0000 (0:00:01.319) 0:05:53.219 *********** 2025-06-02 17:21:00.111897 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:21:00.175347 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:21:00.243190 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:21:00.316042 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:21:00.485582 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:21:00.486627 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:21:00.488386 | orchestrator | changed: [testbed-manager] 2025-06-02 17:21:00.489241 | orchestrator | 2025-06-02 17:21:00.489845 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-06-02 17:21:00.490662 | orchestrator | Monday 02 June 2025 17:21:00 +0000 (0:00:00.591) 0:05:53.811 *********** 2025-06-02 17:21:10.207105 | orchestrator | ok: [testbed-manager] 2025-06-02 17:21:10.209385 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:21:10.209535 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:21:10.210350 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:21:10.212336 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:21:10.212983 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:21:10.213898 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:21:10.214369 | orchestrator | 2025-06-02 17:21:10.215255 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-06-02 17:21:10.215728 | 
orchestrator | Monday 02 June 2025 17:21:10 +0000 (0:00:09.721) 0:06:03.532 *********** 2025-06-02 17:21:11.118460 | orchestrator | changed: [testbed-manager] 2025-06-02 17:21:11.119047 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:21:11.119288 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:21:11.120476 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:21:11.121278 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:21:11.122240 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:21:11.122698 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:21:11.123512 | orchestrator | 2025-06-02 17:21:11.124349 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-06-02 17:21:11.124422 | orchestrator | Monday 02 June 2025 17:21:11 +0000 (0:00:00.911) 0:06:04.444 *********** 2025-06-02 17:21:20.035633 | orchestrator | ok: [testbed-manager] 2025-06-02 17:21:20.036289 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:21:20.037539 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:21:20.039987 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:21:20.040950 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:21:20.041437 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:21:20.042784 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:21:20.044941 | orchestrator | 2025-06-02 17:21:20.045899 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-06-02 17:21:20.046428 | orchestrator | Monday 02 June 2025 17:21:20 +0000 (0:00:08.916) 0:06:13.361 *********** 2025-06-02 17:21:31.115586 | orchestrator | ok: [testbed-manager] 2025-06-02 17:21:31.115688 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:21:31.115699 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:21:31.116001 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:21:31.116984 | orchestrator | changed: [testbed-node-1] 2025-06-02 
17:21:31.117630 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:21:31.118138 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:21:31.118863 | orchestrator | 2025-06-02 17:21:31.120756 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-06-02 17:21:31.121581 | orchestrator | Monday 02 June 2025 17:21:31 +0000 (0:00:11.076) 0:06:24.437 *********** 2025-06-02 17:21:31.569681 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-06-02 17:21:31.569872 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-06-02 17:21:31.716050 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-06-02 17:21:32.480930 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-06-02 17:21:32.481039 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-06-02 17:21:32.481055 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-06-02 17:21:32.482637 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-06-02 17:21:32.482860 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-06-02 17:21:32.483816 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-06-02 17:21:32.483859 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-06-02 17:21:32.484237 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-06-02 17:21:32.484537 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-06-02 17:21:32.484634 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-06-02 17:21:32.484969 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-06-02 17:21:32.485144 | orchestrator | 2025-06-02 17:21:32.485497 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-06-02 17:21:32.485825 | orchestrator | Monday 02 June 2025 17:21:32 +0000 (0:00:01.367) 0:06:25.805 *********** 2025-06-02 17:21:32.619756 | orchestrator 
| skipping: [testbed-manager] 2025-06-02 17:21:32.692433 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:21:32.778094 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:21:32.849791 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:21:32.915557 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:21:33.047969 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:21:33.050443 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:21:33.050879 | orchestrator | 2025-06-02 17:21:33.051548 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-06-02 17:21:33.052599 | orchestrator | Monday 02 June 2025 17:21:33 +0000 (0:00:00.564) 0:06:26.369 *********** 2025-06-02 17:21:36.776052 | orchestrator | ok: [testbed-manager] 2025-06-02 17:21:36.776222 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:21:36.778627 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:21:36.778684 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:21:36.779416 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:21:36.779679 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:21:36.780350 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:21:36.782083 | orchestrator | 2025-06-02 17:21:36.782677 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-06-02 17:21:36.783331 | orchestrator | Monday 02 June 2025 17:21:36 +0000 (0:00:03.730) 0:06:30.099 *********** 2025-06-02 17:21:36.927343 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:21:36.995843 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:21:37.062175 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:21:37.132987 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:21:37.200265 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:21:37.306572 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:21:37.306773 | 
orchestrator | skipping: [testbed-node-5] 2025-06-02 17:21:37.307762 | orchestrator | 2025-06-02 17:21:37.309376 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-06-02 17:21:37.312657 | orchestrator | Monday 02 June 2025 17:21:37 +0000 (0:00:00.531) 0:06:30.631 *********** 2025-06-02 17:21:37.383836 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-06-02 17:21:37.384253 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-06-02 17:21:37.455284 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:21:37.456096 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-06-02 17:21:37.457152 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-06-02 17:21:37.525695 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:21:37.526580 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-06-02 17:21:37.527310 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-06-02 17:21:37.608372 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:21:37.608577 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-06-02 17:21:37.609554 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-06-02 17:21:37.690500 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:21:37.690697 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-06-02 17:21:37.690717 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-06-02 17:21:37.762693 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:21:37.764583 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-06-02 17:21:37.768037 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-06-02 17:21:37.881785 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:21:37.882655 | orchestrator | 
skipping: [testbed-node-5] => (item=python3-docker)  2025-06-02 17:21:37.883407 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-06-02 17:21:37.884582 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:21:37.885942 | orchestrator | 2025-06-02 17:21:37.887307 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-06-02 17:21:37.888415 | orchestrator | Monday 02 June 2025 17:21:37 +0000 (0:00:00.575) 0:06:31.207 *********** 2025-06-02 17:21:38.028450 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:21:38.103344 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:21:38.169179 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:21:38.234138 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:21:38.304801 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:21:38.411479 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:21:38.412567 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:21:38.413579 | orchestrator | 2025-06-02 17:21:38.418205 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-06-02 17:21:38.418277 | orchestrator | Monday 02 June 2025 17:21:38 +0000 (0:00:00.528) 0:06:31.736 *********** 2025-06-02 17:21:38.549566 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:21:38.615942 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:21:38.683868 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:21:38.762394 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:21:38.827538 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:21:38.941679 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:21:38.943081 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:21:38.946077 | orchestrator | 2025-06-02 17:21:38.946103 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-06-02 
17:21:38.946144 | orchestrator | Monday 02 June 2025 17:21:38 +0000 (0:00:00.530) 0:06:32.266 ***********
2025-06-02 17:21:39.155492 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:21:39.236680 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:21:39.490795 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:21:39.563833 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:21:39.640083 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:21:39.782814 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:21:39.784764 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:21:39.785717 | orchestrator |
2025-06-02 17:21:39.786528 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-06-02 17:21:39.787395 | orchestrator | Monday 02 June 2025 17:21:39 +0000 (0:00:00.841) 0:06:33.108 ***********
2025-06-02 17:21:41.473583 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:41.474545 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:41.475491 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:41.476596 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:41.478185 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:41.480291 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:41.481266 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:41.482516 | orchestrator |
2025-06-02 17:21:41.483271 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-06-02 17:21:41.483879 | orchestrator | Monday 02 June 2025 17:21:41 +0000 (0:00:01.687) 0:06:34.795 ***********
2025-06-02 17:21:42.366860 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:21:42.367477 | orchestrator |
2025-06-02 17:21:42.368611 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-06-02 17:21:42.369328 | orchestrator | Monday 02 June 2025 17:21:42 +0000 (0:00:00.896) 0:06:35.692 ***********
2025-06-02 17:21:43.189086 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:43.190005 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:21:43.190740 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:21:43.191222 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:21:43.191735 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:21:43.192818 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:21:43.193538 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:21:43.194181 | orchestrator |
2025-06-02 17:21:43.195023 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-06-02 17:21:43.196032 | orchestrator | Monday 02 June 2025 17:21:43 +0000 (0:00:00.821) 0:06:36.513 ***********
2025-06-02 17:21:43.664384 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:43.731015 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:21:44.322323 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:21:44.323421 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:21:44.324345 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:21:44.326399 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:21:44.327421 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:21:44.328723 | orchestrator |
2025-06-02 17:21:44.329810 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-06-02 17:21:44.330464 | orchestrator | Monday 02 June 2025 17:21:44 +0000 (0:00:01.134) 0:06:37.647 ***********
2025-06-02 17:21:45.682265 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:45.683380 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:21:45.686272 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:21:45.687385 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:21:45.687897 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:21:45.688627 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:21:45.689680 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:21:45.690219 | orchestrator |
2025-06-02 17:21:45.690956 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-06-02 17:21:45.691259 | orchestrator | Monday 02 June 2025 17:21:45 +0000 (0:00:01.359) 0:06:39.007 ***********
2025-06-02 17:21:45.814394 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:21:47.107542 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:47.111886 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:47.111927 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:47.111938 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:47.111947 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:47.113039 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:47.114055 | orchestrator |
2025-06-02 17:21:47.115119 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-06-02 17:21:47.115849 | orchestrator | Monday 02 June 2025 17:21:47 +0000 (0:00:01.422) 0:06:40.430 ***********
2025-06-02 17:21:48.432543 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:48.433701 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:21:48.434741 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:21:48.435491 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:21:48.438765 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:21:48.438865 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:21:48.440307 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:21:48.441390 | orchestrator |
2025-06-02 17:21:48.442540 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-06-02 17:21:48.443525 | orchestrator | Monday 02 June 2025 17:21:48 +0000 (0:00:01.326) 0:06:41.756 ***********
2025-06-02 17:21:50.064779 | orchestrator | changed: [testbed-manager]
2025-06-02 17:21:50.065739 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:21:50.072023 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:21:50.072949 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:21:50.073721 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:21:50.074845 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:21:50.075780 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:21:50.077059 | orchestrator |
2025-06-02 17:21:50.077782 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-06-02 17:21:50.078789 | orchestrator | Monday 02 June 2025 17:21:50 +0000 (0:00:01.631) 0:06:43.388 ***********
2025-06-02 17:21:50.970078 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:21:50.971511 | orchestrator |
2025-06-02 17:21:50.972449 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-06-02 17:21:50.974353 | orchestrator | Monday 02 June 2025 17:21:50 +0000 (0:00:00.907) 0:06:44.295 ***********
2025-06-02 17:21:52.468282 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:52.469074 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:52.469988 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:52.470936 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:52.473790 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:52.477682 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:52.477729 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:52.477742 | orchestrator |
2025-06-02 17:21:52.478368 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-06-02 17:21:52.479607 | orchestrator | Monday 02 June 2025 17:21:52 +0000 (0:00:01.496) 0:06:45.792 ***********
2025-06-02 17:21:53.719594 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:53.720576 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:53.723872 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:53.723925 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:53.725384 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:53.729221 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:53.729280 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:53.729293 | orchestrator |
2025-06-02 17:21:53.729934 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-06-02 17:21:53.731004 | orchestrator | Monday 02 June 2025 17:21:53 +0000 (0:00:01.249) 0:06:47.042 ***********
2025-06-02 17:21:55.125687 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:55.126755 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:55.127314 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:55.130966 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:55.132156 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:55.134320 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:55.134993 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:55.136277 | orchestrator |
2025-06-02 17:21:55.136872 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-06-02 17:21:55.138252 | orchestrator | Monday 02 June 2025 17:21:55 +0000 (0:00:01.407) 0:06:48.449 ***********
2025-06-02 17:21:56.293191 | orchestrator | ok: [testbed-manager]
2025-06-02 17:21:56.294699 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:56.295446 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:56.295646 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:56.296625 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:21:56.297603 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:21:56.298001 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:21:56.298641 | orchestrator |
2025-06-02 17:21:56.299239 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-06-02 17:21:56.300786 | orchestrator | Monday 02 June 2025 17:21:56 +0000 (0:00:01.167) 0:06:49.617 ***********
2025-06-02 17:21:57.644066 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:21:57.644799 | orchestrator |
2025-06-02 17:21:57.646216 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:21:57.648667 | orchestrator | Monday 02 June 2025 17:21:57 +0000 (0:00:01.034) 0:06:50.651 ***********
2025-06-02 17:21:57.649732 | orchestrator |
2025-06-02 17:21:57.650765 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:21:57.651787 | orchestrator | Monday 02 June 2025 17:21:57 +0000 (0:00:00.041) 0:06:50.693 ***********
2025-06-02 17:21:57.652950 | orchestrator |
2025-06-02 17:21:57.653873 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:21:57.655045 | orchestrator | Monday 02 June 2025 17:21:57 +0000 (0:00:00.047) 0:06:50.740 ***********
2025-06-02 17:21:57.655864 | orchestrator |
2025-06-02 17:21:57.657055 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:21:57.657778 | orchestrator | Monday 02 June 2025 17:21:57 +0000 (0:00:00.041) 0:06:50.781 ***********
2025-06-02 17:21:57.658422 | orchestrator |
2025-06-02 17:21:57.659055 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:21:57.660178 | orchestrator | Monday 02 June 2025 17:21:57 +0000 (0:00:00.039) 0:06:50.821 ***********
2025-06-02 17:21:57.660637 | orchestrator |
2025-06-02 17:21:57.661735 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:21:57.663051 | orchestrator | Monday 02 June 2025 17:21:57 +0000 (0:00:00.064) 0:06:50.885 ***********
2025-06-02 17:21:57.663931 | orchestrator |
2025-06-02 17:21:57.665330 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-06-02 17:21:57.666001 | orchestrator | Monday 02 June 2025 17:21:57 +0000 (0:00:00.040) 0:06:50.926 ***********
2025-06-02 17:21:57.666530 | orchestrator |
2025-06-02 17:21:57.667548 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-06-02 17:21:57.668537 | orchestrator | Monday 02 June 2025 17:21:57 +0000 (0:00:00.041) 0:06:50.967 ***********
2025-06-02 17:21:58.909885 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:21:58.909993 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:21:58.911264 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:21:58.912227 | orchestrator |
2025-06-02 17:21:58.913394 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-06-02 17:21:58.916791 | orchestrator | Monday 02 June 2025 17:21:58 +0000 (0:00:01.265) 0:06:52.232 ***********
2025-06-02 17:22:00.513249 | orchestrator | changed: [testbed-manager]
2025-06-02 17:22:00.513531 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:22:00.514351 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:22:00.515270 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:22:00.515722 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:22:00.516756 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:22:00.517496 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:22:00.517837 | orchestrator |
2025-06-02 17:22:00.518675 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-06-02 17:22:00.519286 | orchestrator | Monday 02 June 2025 17:22:00 +0000 (0:00:01.606) 0:06:53.839 ***********
2025-06-02 17:22:01.698627 | orchestrator | changed: [testbed-manager]
2025-06-02 17:22:01.699469 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:22:01.700296 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:22:01.700575 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:22:01.701026 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:22:01.701248 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:22:01.703690 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:22:01.704482 | orchestrator |
2025-06-02 17:22:01.705130 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-06-02 17:22:01.705994 | orchestrator | Monday 02 June 2025 17:22:01 +0000 (0:00:01.183) 0:06:55.022 ***********
2025-06-02 17:22:01.851806 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:22:04.288790 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:22:04.289024 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:22:04.289843 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:22:04.290120 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:22:04.290153 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:22:04.292481 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:22:04.292966 | orchestrator |
2025-06-02 17:22:04.294174 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-06-02 17:22:04.295170 | orchestrator | Monday 02 June 2025 17:22:04 +0000 (0:00:02.592) 0:06:57.615 ***********
2025-06-02 17:22:04.386976 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:22:04.387461 | orchestrator |
2025-06-02 17:22:04.389130 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-06-02 17:22:04.389717 | orchestrator | Monday 02 June 2025 17:22:04 +0000 (0:00:00.095) 0:06:57.711 ***********
2025-06-02 17:22:05.438885 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:05.440354 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:22:05.441516 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:22:05.442641 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:22:05.443775 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:22:05.445158 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:22:05.446243 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:22:05.447047 | orchestrator |
2025-06-02 17:22:05.447752 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-06-02 17:22:05.448241 | orchestrator | Monday 02 June 2025 17:22:05 +0000 (0:00:01.051) 0:06:58.763 ***********
2025-06-02 17:22:05.773369 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:22:05.848518 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:22:05.928928 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:22:05.996547 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:22:06.067254 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:22:06.197017 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:22:06.198603 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:22:06.202138 | orchestrator |
2025-06-02 17:22:06.202167 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-06-02 17:22:06.203102 | orchestrator | Monday 02 June 2025 17:22:06 +0000 (0:00:00.760) 0:06:59.523 ***********
2025-06-02 17:22:07.161724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:22:07.164114 | orchestrator |
2025-06-02 17:22:07.164189 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-06-02 17:22:07.164834 | orchestrator | Monday 02 June 2025 17:22:07 +0000 (0:00:00.960) 0:07:00.484 ***********
2025-06-02 17:22:07.573967 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:07.993338 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:07.993772 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:07.994628 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:07.995477 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:07.996319 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:07.997924 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:07.998774 | orchestrator |
2025-06-02 17:22:07.999757 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-06-02 17:22:08.000353 | orchestrator | Monday 02 June 2025 17:22:07 +0000 (0:00:00.834) 0:07:01.318 ***********
2025-06-02 17:22:10.879066 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-06-02 17:22:10.879341 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-06-02 17:22:10.880848 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-06-02 17:22:10.881849 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-06-02 17:22:10.882641 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-06-02 17:22:10.883657 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-06-02 17:22:10.887372 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-06-02 17:22:10.887949 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-06-02 17:22:10.888907 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-06-02 17:22:10.889789 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-06-02 17:22:10.890655 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-06-02 17:22:10.891566 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-06-02 17:22:10.892565 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-06-02 17:22:10.893186 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-06-02 17:22:10.893890 | orchestrator |
2025-06-02 17:22:10.894733 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-06-02 17:22:10.895286 | orchestrator | Monday 02 June 2025 17:22:10 +0000 (0:00:02.885) 0:07:04.204 ***********
2025-06-02 17:22:11.024395 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:22:11.088753 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:22:11.179060 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:22:11.243630 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:22:11.310341 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:22:11.423398 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:22:11.423494 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:22:11.425025 | orchestrator |
2025-06-02 17:22:11.426436 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-06-02 17:22:11.427903 | orchestrator | Monday 02 June 2025 17:22:11 +0000 (0:00:00.546) 0:07:04.750 ***********
2025-06-02 17:22:12.241313 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:22:12.241879 | orchestrator |
2025-06-02 17:22:12.243361 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-06-02 17:22:12.244277 | orchestrator | Monday 02 June 2025 17:22:12 +0000 (0:00:00.814) 0:07:05.565 ***********
2025-06-02 17:22:12.843626 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:12.914809 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:13.373810 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:13.374611 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:13.375857 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:13.379280 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:13.379346 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:13.379367 | orchestrator |
2025-06-02 17:22:13.379388 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-06-02 17:22:13.379945 | orchestrator | Monday 02 June 2025 17:22:13 +0000 (0:00:01.134) 0:07:06.699 ***********
2025-06-02 17:22:13.767560 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:14.260332 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:14.260969 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:14.261679 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:14.262363 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:14.263773 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:14.264498 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:14.265025 | orchestrator |
2025-06-02 17:22:14.265811 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-06-02 17:22:14.266113 | orchestrator | Monday 02 June 2025 17:22:14 +0000 (0:00:00.883) 0:07:07.583 ***********
2025-06-02 17:22:14.399361 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:22:14.467102 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:22:14.546602 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:22:14.634692 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:22:14.713533 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:22:14.822315 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:22:14.822413 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:22:14.823215 | orchestrator |
2025-06-02 17:22:14.824712 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-06-02 17:22:14.825805 | orchestrator | Monday 02 June 2025 17:22:14 +0000 (0:00:00.562) 0:07:08.145 ***********
2025-06-02 17:22:16.260261 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:16.260374 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:16.260459 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:16.260474 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:16.263304 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:16.263331 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:16.263344 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:16.263356 | orchestrator |
2025-06-02 17:22:16.263370 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-06-02 17:22:16.264688 | orchestrator | Monday 02 June 2025 17:22:16 +0000 (0:00:01.439) 0:07:09.585 ***********
2025-06-02 17:22:16.394704 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:22:16.472315 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:22:16.540063 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:22:16.604024 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:22:16.672867 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:22:16.760580 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:22:16.761401 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:22:16.762641 | orchestrator |
2025-06-02 17:22:16.763556 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-06-02 17:22:16.764296 | orchestrator | Monday 02 June 2025 17:22:16 +0000 (0:00:00.499) 0:07:10.085 ***********
2025-06-02 17:22:24.652562 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:24.652664 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:22:24.654774 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:22:24.657784 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:22:24.657804 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:22:24.657813 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:22:24.658174 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:22:24.658813 | orchestrator |
2025-06-02 17:22:24.659420 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-06-02 17:22:24.659751 | orchestrator | Monday 02 June 2025 17:22:24 +0000 (0:00:07.886) 0:07:17.972 ***********
2025-06-02 17:22:26.080414 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:26.080543 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:22:26.081163 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:22:26.081555 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:22:26.082559 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:22:26.082853 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:22:26.083404 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:22:26.083923 | orchestrator |
2025-06-02 17:22:26.084595 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-06-02 17:22:26.084969 | orchestrator | Monday 02 June 2025 17:22:26 +0000 (0:00:01.434) 0:07:19.407 ***********
2025-06-02 17:22:27.876780 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:27.877692 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:22:27.879549 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:22:27.880290 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:22:27.881636 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:22:27.882347 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:22:27.882610 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:22:27.883142 | orchestrator |
2025-06-02 17:22:27.883490 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-06-02 17:22:27.883818 | orchestrator | Monday 02 June 2025 17:22:27 +0000 (0:00:01.793) 0:07:21.200 ***********
2025-06-02 17:22:29.767021 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:29.767223 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:22:29.767241 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:22:29.767310 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:22:29.769168 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:22:29.769444 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:22:29.770713 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:22:29.771836 | orchestrator |
2025-06-02 17:22:29.772224 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 17:22:29.773481 | orchestrator | Monday 02 June 2025 17:22:29 +0000 (0:00:01.890) 0:07:23.091 ***********
2025-06-02 17:22:30.221745 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:30.647221 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:30.648453 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:30.649534 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:30.650375 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:30.652463 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:30.653212 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:30.653936 | orchestrator |
2025-06-02 17:22:30.654836 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 17:22:30.655549 | orchestrator | Monday 02 June 2025 17:22:30 +0000 (0:00:00.881) 0:07:23.972 ***********
2025-06-02 17:22:30.803691 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:22:30.871287 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:22:30.938184 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:22:31.015011 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:22:31.092276 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:22:31.557884 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:22:31.559637 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:22:31.560394 | orchestrator |
2025-06-02 17:22:31.561616 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-06-02 17:22:31.564996 | orchestrator | Monday 02 June 2025 17:22:31 +0000 (0:00:00.911) 0:07:24.884 ***********
2025-06-02 17:22:31.712761 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:22:31.803457 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:22:31.870516 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:22:31.934567 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:22:32.013489 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:22:32.123349 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:22:32.123958 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:22:32.126277 | orchestrator |
2025-06-02 17:22:32.127480 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-06-02 17:22:32.128232 | orchestrator | Monday 02 June 2025 17:22:32 +0000 (0:00:00.564) 0:07:25.448 ***********
2025-06-02 17:22:32.290166 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:32.360795 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:32.430283 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:32.505930 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:32.792535 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:32.893437 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:32.893546 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:32.894346 | orchestrator |
2025-06-02 17:22:32.896543 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-06-02 17:22:32.898107 | orchestrator | Monday 02 June 2025 17:22:32 +0000 (0:00:00.769) 0:07:26.218 ***********
2025-06-02 17:22:33.056310 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:33.122589 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:33.206110 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:33.292951 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:33.359981 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:33.481880 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:33.482278 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:33.483075 | orchestrator |
2025-06-02 17:22:33.484483 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-06-02 17:22:33.485270 | orchestrator | Monday 02 June 2025 17:22:33 +0000 (0:00:00.589) 0:07:26.807 ***********
2025-06-02 17:22:33.637159 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:33.717772 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:33.789128 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:33.852811 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:33.923954 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:34.050345 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:34.050840 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:34.052434 | orchestrator |
2025-06-02 17:22:34.053449 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-06-02 17:22:34.054508 | orchestrator | Monday 02 June 2025 17:22:34 +0000 (0:00:00.568) 0:07:27.376 ***********
2025-06-02 17:22:39.798522 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:39.799612 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:39.800758 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:39.801144 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:39.801509 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:39.802318 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:39.802393 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:39.802602 | orchestrator |
2025-06-02 17:22:39.803114 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-06-02 17:22:39.803373 | orchestrator | Monday 02 June 2025 17:22:39 +0000 (0:00:05.747) 0:07:33.123 ***********
2025-06-02 17:22:39.959111 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:22:40.028153 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:22:40.097184 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:22:40.174756 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:22:40.240824 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:22:40.369499 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:22:40.371094 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:22:40.372310 | orchestrator |
2025-06-02 17:22:40.372925 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-06-02 17:22:40.374128 | orchestrator | Monday 02 June 2025 17:22:40 +0000 (0:00:00.571) 0:07:33.694 ***********
2025-06-02 17:22:41.442284 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:22:41.442496 | orchestrator |
2025-06-02 17:22:41.442599 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-06-02 17:22:41.444928 | orchestrator | Monday 02 June 2025 17:22:41 +0000 (0:00:01.071) 0:07:34.765 ***********
2025-06-02 17:22:43.260898 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:43.261458 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:43.262577 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:43.264530 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:43.265829 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:43.266160 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:43.267261 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:43.267691 | orchestrator |
2025-06-02 17:22:43.268267 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-06-02 17:22:43.269597 | orchestrator | Monday 02 June 2025 17:22:43 +0000 (0:00:01.817) 0:07:36.583 ***********
2025-06-02 17:22:44.434570 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:44.434686 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:44.435638 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:44.437311 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:44.438089 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:44.439358 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:44.440448 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:44.441668 | orchestrator |
2025-06-02 17:22:44.442708 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-06-02 17:22:44.443616 | orchestrator | Monday 02 June 2025 17:22:44 +0000 (0:00:01.176) 0:07:37.759 ***********
2025-06-02 17:22:45.107445 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:45.549217 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:45.549885 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:45.550954 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:45.551723 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:45.553504 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:45.553528 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:45.554238 | orchestrator |
2025-06-02 17:22:45.555251 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-06-02 17:22:45.555768 | orchestrator | Monday 02 June 2025 17:22:45 +0000 (0:00:01.113) 0:07:38.873 ***********
2025-06-02 17:22:47.279889 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 17:22:47.279981 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 17:22:47.280386 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 17:22:47.281384 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 17:22:47.282161 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 17:22:47.283968 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 17:22:47.284169 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-06-02 17:22:47.284194 | orchestrator |
2025-06-02 17:22:47.286841 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-06-02 17:22:47.286889 | orchestrator | Monday 02 June 2025 17:22:47 +0000 (0:00:01.731) 0:07:40.604 ***********
2025-06-02 17:22:48.097853 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:22:48.098153 | orchestrator |
2025-06-02 17:22:48.099005 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-06-02 17:22:48.102796 | orchestrator | Monday 02 June 2025 17:22:48 +0000 (0:00:00.817) 0:07:41.421 ***********
2025-06-02 17:22:57.311637 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:22:57.311857 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:22:57.312234 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:22:57.314164 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:22:57.314519 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:22:57.315562 | orchestrator | changed: [testbed-manager]
2025-06-02 17:22:57.316408 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:22:57.317907 | orchestrator |
2025-06-02 17:22:57.318954 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-06-02 17:22:57.320399 | orchestrator | Monday 02 June 2025 17:22:57 +0000 (0:00:09.213) 0:07:50.635 ***********
2025-06-02 17:22:59.184006 | orchestrator | ok: [testbed-manager]
2025-06-02 17:22:59.185722 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:22:59.188971 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:22:59.189008 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:22:59.189049 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:22:59.190627 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:22:59.191212 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:22:59.191956 | orchestrator |
2025-06-02 17:22:59.192724 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-06-02 17:22:59.193195 | orchestrator | Monday 02 June 2025 17:22:59 +0000 (0:00:01.872) 0:07:52.507 ***********
2025-06-02 17:23:00.546430 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:23:00.546544 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:23:00.546973 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:23:00.547748 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:23:00.548007 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:23:00.548488 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:23:00.550834 | orchestrator |
2025-06-02 17:23:00.550871 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-06-02 17:23:00.550923 | orchestrator | Monday 02 June 2025 17:23:00 +0000 (0:00:01.362) 0:07:53.870 *********** 2025-06-02 17:23:02.051960 | orchestrator | changed: [testbed-manager] 2025-06-02 17:23:02.052135 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:23:02.052528 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:23:02.055815 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:23:02.055854 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:23:02.055866 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:23:02.056839 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:23:02.056998 | orchestrator | 2025-06-02 17:23:02.057412 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-02 17:23:02.058143 | orchestrator | 2025-06-02 17:23:02.058691 | orchestrator | TASK [Include hardening role] ************************************************** 2025-06-02 17:23:02.059491 | orchestrator | Monday 02 June 2025 17:23:02 +0000 (0:00:01.507) 0:07:55.377 *********** 2025-06-02 17:23:02.184804 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:23:02.247155 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:23:02.321996 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:23:02.410922 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:23:02.480678 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:23:02.610958 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:23:02.612371 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:23:02.615858 | orchestrator | 2025-06-02 17:23:02.615904 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-06-02 17:23:02.617330 | orchestrator | 2025-06-02 17:23:02.618705 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-02 17:23:02.619143 | orchestrator | Monday 02 June 2025 17:23:02 +0000 (0:00:00.560) 
0:07:55.938 *********** 2025-06-02 17:23:04.024188 | orchestrator | changed: [testbed-manager] 2025-06-02 17:23:04.024374 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:23:04.025903 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:23:04.027402 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:23:04.029346 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:23:04.030119 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:23:04.031624 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:23:04.033191 | orchestrator | 2025-06-02 17:23:04.034418 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-02 17:23:04.034946 | orchestrator | Monday 02 June 2025 17:23:04 +0000 (0:00:01.409) 0:07:57.347 *********** 2025-06-02 17:23:05.678677 | orchestrator | ok: [testbed-manager] 2025-06-02 17:23:05.679537 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:23:05.680348 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:23:05.685210 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:23:05.686246 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:23:05.687319 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:23:05.688109 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:23:05.689152 | orchestrator | 2025-06-02 17:23:05.690330 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-02 17:23:05.690844 | orchestrator | Monday 02 June 2025 17:23:05 +0000 (0:00:01.655) 0:07:59.003 *********** 2025-06-02 17:23:05.807156 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:23:05.881670 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:23:05.947857 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:23:06.009865 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:23:06.095157 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:23:06.525168 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:23:06.525391 | 
orchestrator | skipping: [testbed-node-5] 2025-06-02 17:23:06.525690 | orchestrator | 2025-06-02 17:23:06.526893 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-06-02 17:23:06.527778 | orchestrator | Monday 02 June 2025 17:23:06 +0000 (0:00:00.847) 0:07:59.851 *********** 2025-06-02 17:23:07.768089 | orchestrator | changed: [testbed-manager] 2025-06-02 17:23:07.769591 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:23:07.770277 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:23:07.772306 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:23:07.774334 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:23:07.775499 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:23:07.776266 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:23:07.777230 | orchestrator | 2025-06-02 17:23:07.778834 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-02 17:23:07.781265 | orchestrator | 2025-06-02 17:23:07.781945 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-02 17:23:07.783643 | orchestrator | Monday 02 June 2025 17:23:07 +0000 (0:00:01.242) 0:08:01.093 *********** 2025-06-02 17:23:08.746454 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:23:08.747791 | orchestrator | 2025-06-02 17:23:08.749436 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-02 17:23:08.750583 | orchestrator | Monday 02 June 2025 17:23:08 +0000 (0:00:00.979) 0:08:02.072 *********** 2025-06-02 17:23:09.167769 | orchestrator | ok: [testbed-manager] 2025-06-02 17:23:09.594577 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:23:09.594678 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:23:09.595408 | orchestrator | ok: 
[testbed-node-2] 2025-06-02 17:23:09.596642 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:23:09.597562 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:23:09.598590 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:23:09.599269 | orchestrator | 2025-06-02 17:23:09.599909 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-02 17:23:09.600585 | orchestrator | Monday 02 June 2025 17:23:09 +0000 (0:00:00.849) 0:08:02.922 *********** 2025-06-02 17:23:10.762799 | orchestrator | changed: [testbed-manager] 2025-06-02 17:23:10.763909 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:23:10.764997 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:23:10.765975 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:23:10.766886 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:23:10.768180 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:23:10.768204 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:23:10.769091 | orchestrator | 2025-06-02 17:23:10.770003 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-02 17:23:10.770522 | orchestrator | Monday 02 June 2025 17:23:10 +0000 (0:00:01.165) 0:08:04.087 *********** 2025-06-02 17:23:11.808574 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:23:11.808742 | orchestrator | 2025-06-02 17:23:11.810474 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-02 17:23:11.811493 | orchestrator | Monday 02 June 2025 17:23:11 +0000 (0:00:01.046) 0:08:05.133 *********** 2025-06-02 17:23:12.637092 | orchestrator | ok: [testbed-manager] 2025-06-02 17:23:12.637205 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:23:12.637633 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:23:12.638368 | orchestrator | ok: 
[testbed-node-2] 2025-06-02 17:23:12.639173 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:23:12.642174 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:23:12.643351 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:23:12.644571 | orchestrator | 2025-06-02 17:23:12.645517 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-02 17:23:12.646627 | orchestrator | Monday 02 June 2025 17:23:12 +0000 (0:00:00.826) 0:08:05.960 *********** 2025-06-02 17:23:13.075412 | orchestrator | changed: [testbed-manager] 2025-06-02 17:23:13.755854 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:23:13.756225 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:23:13.758079 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:23:13.758715 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:23:13.759182 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:23:13.760232 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:23:13.761301 | orchestrator | 2025-06-02 17:23:13.761841 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:23:13.762211 | orchestrator | 2025-06-02 17:23:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 17:23:13.762532 | orchestrator | 2025-06-02 17:23:13 | INFO  | Please wait and do not abort execution. 
2025-06-02 17:23:13.763127 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-06-02 17:23:13.763860 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-06-02 17:23:13.764271 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 17:23:13.765098 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 17:23:13.765408 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 17:23:13.766395 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 17:23:13.766834 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-06-02 17:23:13.767737 | orchestrator |
2025-06-02 17:23:13.768365 | orchestrator |
2025-06-02 17:23:13.769109 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:23:13.769500 | orchestrator | Monday 02 June 2025 17:23:13 +0000 (0:00:01.123) 0:08:07.083 ***********
2025-06-02 17:23:13.770266 | orchestrator | ===============================================================================
2025-06-02 17:23:13.770999 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.09s
2025-06-02 17:23:13.771545 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.30s
2025-06-02 17:23:13.771853 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.68s
2025-06-02 17:23:13.772928 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.63s
2025-06-02 17:23:13.774115 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.59s
2025-06-02 17:23:13.774267 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.07s
2025-06-02 17:23:13.775065 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.08s
2025-06-02 17:23:13.775908 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.72s
2025-06-02 17:23:13.776350 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.21s
2025-06-02 17:23:13.777526 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.92s
2025-06-02 17:23:13.777849 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.83s
2025-06-02 17:23:13.778754 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.48s
2025-06-02 17:23:13.779432 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.10s
2025-06-02 17:23:13.780530 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.99s
2025-06-02 17:23:13.780869 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.89s
2025-06-02 17:23:13.781661 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.87s
2025-06-02 17:23:13.782566 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.49s
2025-06-02 17:23:13.783772 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.93s
2025-06-02 17:23:13.783955 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.85s
2025-06-02 17:23:13.784573 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.78s
2025-06-02 17:23:14.629533 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-06-02 17:23:14.629630 | orchestrator | + osism apply network
2025-06-02 17:23:16.984774 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:23:16.984878 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:23:16.984893 | orchestrator | Registering Redlock._release_script
2025-06-02 17:23:17.059884 | orchestrator | 2025-06-02 17:23:17 | INFO  | Task 8ea9d2dd-8508-4916-8ca9-b0d4b12b0ddf (network) was prepared for execution.
2025-06-02 17:23:17.061791 | orchestrator | 2025-06-02 17:23:17 | INFO  | It takes a moment until task 8ea9d2dd-8508-4916-8ca9-b0d4b12b0ddf (network) has been started and output is visible here.
2025-06-02 17:23:21.606668 | orchestrator |
2025-06-02 17:23:21.607063 | orchestrator | PLAY [Apply role network] ******************************************************
2025-06-02 17:23:21.608490 | orchestrator |
2025-06-02 17:23:21.610841 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-06-02 17:23:21.611922 | orchestrator | Monday 02 June 2025 17:23:21 +0000 (0:00:00.299) 0:00:00.299 ***********
2025-06-02 17:23:21.765688 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:21.843685 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:23:21.926330 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:23:22.005219 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:23:22.200973 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:23:22.337366 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:23:22.338425 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:23:22.338814 | orchestrator |
2025-06-02 17:23:22.339516 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-06-02 17:23:22.340490 | orchestrator | Monday 02 June 2025 17:23:22 +0000 (0:00:00.732) 0:00:01.032 ***********
2025-06-02 17:23:23.954514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:23:23.955124 | orchestrator |
2025-06-02 17:23:23.957622 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-06-02 17:23:23.958555 | orchestrator | Monday 02 June 2025 17:23:23 +0000 (0:00:01.614) 0:00:02.647 ***********
2025-06-02 17:23:26.004305 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:23:26.004582 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:23:26.006198 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:23:26.007513 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:23:26.008235 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:26.009480 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:23:26.012488 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:23:26.013282 | orchestrator |
2025-06-02 17:23:26.014167 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-06-02 17:23:26.014945 | orchestrator | Monday 02 June 2025 17:23:25 +0000 (0:00:02.052) 0:00:04.699 ***********
2025-06-02 17:23:27.743944 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:27.748984 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:23:27.749066 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:23:27.749080 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:23:27.749889 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:23:27.751043 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:23:27.752136 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:23:27.753040 | orchestrator |
2025-06-02 17:23:27.754271 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-06-02 17:23:27.755680 | orchestrator | Monday 02 June 2025 17:23:27 +0000 (0:00:01.735) 0:00:06.434 ***********
2025-06-02 17:23:28.287410 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-06-02 17:23:28.287820 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-06-02 17:23:28.753863 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-06-02 17:23:28.755873 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-06-02 17:23:28.759227 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-06-02 17:23:28.759264 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-06-02 17:23:28.759455 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-06-02 17:23:28.760261 | orchestrator |
2025-06-02 17:23:28.761855 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-06-02 17:23:28.762876 | orchestrator | Monday 02 June 2025 17:23:28 +0000 (0:00:01.016) 0:00:07.451 ***********
2025-06-02 17:23:32.401127 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 17:23:32.401290 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 17:23:32.402088 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 17:23:32.403501 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 17:23:32.404013 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 17:23:32.404578 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 17:23:32.405346 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 17:23:32.407041 | orchestrator |
2025-06-02 17:23:32.408667 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-06-02 17:23:32.412018 | orchestrator | Monday 02 June 2025 17:23:32 +0000 (0:00:03.637) 0:00:11.088 ***********
2025-06-02 17:23:34.019783 | orchestrator | changed: [testbed-manager]
2025-06-02 17:23:34.021375 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:23:34.022483 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:23:34.024069 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:23:34.025019 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:23:34.026014 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:23:34.026826 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:23:34.027933 | orchestrator |
2025-06-02 17:23:34.028482 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-06-02 17:23:34.029607 | orchestrator | Monday 02 June 2025 17:23:34 +0000 (0:00:01.626) 0:00:12.715 ***********
2025-06-02 17:23:35.977523 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 17:23:35.978272 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-02 17:23:35.979357 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 17:23:35.980698 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-02 17:23:35.983261 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-02 17:23:35.984120 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-02 17:23:35.985015 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-02 17:23:35.985824 | orchestrator |
2025-06-02 17:23:35.989323 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-06-02 17:23:35.989421 | orchestrator | Monday 02 June 2025 17:23:35 +0000 (0:00:01.958) 0:00:14.674 ***********
2025-06-02 17:23:36.423432 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:36.720868 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:23:37.168231 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:23:37.168706 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:23:37.170924 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:23:37.171048 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:23:37.171518 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:23:37.172550 | orchestrator |
2025-06-02 17:23:37.173213 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-06-02 17:23:37.173871 | orchestrator | Monday 02 June 2025 17:23:37 +0000 (0:00:01.186) 0:00:15.860 ***********
2025-06-02 17:23:37.349219 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:23:37.435826 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:23:37.523909 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:23:37.607828 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:23:37.717831 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:23:37.879473 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:23:37.880263 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:23:37.880325 | orchestrator |
2025-06-02 17:23:37.881119 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-06-02 17:23:37.881820 | orchestrator | Monday 02 June 2025 17:23:37 +0000 (0:00:00.715) 0:00:16.575 ***********
2025-06-02 17:23:40.019401 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:23:40.019952 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:40.020037 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:23:40.022077 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:23:40.023102 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:23:40.024005 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:23:40.024086 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:23:40.024647 | orchestrator |
2025-06-02 17:23:40.025315 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-06-02 17:23:40.026129 | orchestrator | Monday 02 June 2025 17:23:40 +0000 (0:00:02.134) 0:00:18.709 ***********
2025-06-02 17:23:40.287422 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:23:40.378836 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:23:40.466796 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:23:40.555351 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:23:40.908885 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:23:40.909356 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:23:40.910412 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-06-02 17:23:40.910930 | orchestrator |
2025-06-02 17:23:40.912391 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-06-02 17:23:40.913194 | orchestrator | Monday 02 June 2025 17:23:40 +0000 (0:00:00.897) 0:00:19.607 ***********
2025-06-02 17:23:42.643760 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:42.644832 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:23:42.648002 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:23:42.648072 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:23:42.650344 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:23:42.651373 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:23:42.652634 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:23:42.654512 | orchestrator |
2025-06-02 17:23:42.656364 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-06-02 17:23:42.657382 | orchestrator | Monday 02 June 2025 17:23:42 +0000 (0:00:01.729) 0:00:21.336 ***********
2025-06-02 17:23:43.981553 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:23:43.982727 | orchestrator |
2025-06-02 17:23:43.985230 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-06-02 17:23:43.985275 | orchestrator | Monday 02 June 2025 17:23:43 +0000 (0:00:01.338) 0:00:22.675 ***********
2025-06-02 17:23:44.979423 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:44.979646 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:23:44.980617 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:23:44.981249 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:23:44.982328 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:23:44.982451 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:23:44.983040 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:23:44.983625 | orchestrator |
2025-06-02 17:23:44.984160 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-06-02 17:23:44.984912 | orchestrator | Monday 02 June 2025 17:23:44 +0000 (0:00:01.001) 0:00:23.676 ***********
2025-06-02 17:23:45.372484 | orchestrator | ok: [testbed-manager]
2025-06-02 17:23:45.481495 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:23:45.569317 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:23:45.658355 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:23:45.744661 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:23:45.897929 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:23:45.899149 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:23:45.902111 | orchestrator |
2025-06-02 17:23:45.902228 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-06-02 17:23:45.903281 | orchestrator | Monday 02 June 2025 17:23:45 +0000 (0:00:00.919) 0:00:24.595 ***********
2025-06-02 17:23:46.363505 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:23:46.363595 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:23:46.671922 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:23:46.673111 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:23:46.674483 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:23:47.152937 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:23:47.155901 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:23:47.157735 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:23:47.159255 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:23:47.160493 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:23:47.161653 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:23:47.162867 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:23:47.163346 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-06-02 17:23:47.164952 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-06-02 17:23:47.165784 | orchestrator |
2025-06-02 17:23:47.166467 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-06-02 17:23:47.167156 | orchestrator | Monday 02 June 2025 17:23:47 +0000 (0:00:00.664) 0:00:25.845 ***********
2025-06-02 17:23:47.321346 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:23:47.406391 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:23:47.503358 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:23:47.595014 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:23:47.676811 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:23:47.812879 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:23:47.813673 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:23:47.818324 | orchestrator |
2025-06-02 17:23:47.819043 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-06-02 17:23:47.819230 | orchestrator | Monday 02 June 2025 17:23:47 +0000 (0:00:00.664) 0:00:26.509 ***********
2025-06-02 17:23:51.468499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-0, testbed-node-3, testbed-node-2, testbed-node-4, testbed-node-5
2025-06-02 17:23:51.468812 | orchestrator |
2025-06-02 17:23:51.472810 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-06-02 17:23:51.474467 | orchestrator | Monday 02 June 2025 17:23:51 +0000 (0:00:03.650) 0:00:30.160 ***********
2025-06-02 17:23:56.629636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:56.630394 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:56.632921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:56.634976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:56.636003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:56.636659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:56.637918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:56.638278 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:56.639187 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:23:56.639918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:56.640707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:56.641250 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:56.641735 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:56.642705 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:23:56.643403 | orchestrator |
2025-06-02 17:23:56.643834 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-06-02 17:23:56.644882 | orchestrator | Monday 02 June 2025 17:23:56 +0000 (0:00:05.159) 0:00:35.320 ***********
2025-06-02 17:24:01.464701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:24:01.464808 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:24:01.464824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:24:01.464837 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:24:01.464912 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:24:01.468817 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:24:01.468925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:24:01.473097 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-06-02 17:24:01.473140 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:24:01.473764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:24:01.476266 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:24:01.477323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:24:01.478861 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:24:01.480688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-06-02 17:24:01.481847 | orchestrator |
2025-06-02 17:24:01.483230 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2025-06-02 17:24:01.484153 | orchestrator | Monday 02 June 2025 17:24:01 +0000 (0:00:04.835) 0:00:40.156 ***********
2025-06-02 17:24:02.856792 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:24:02.857976 | orchestrator |
2025-06-02 17:24:02.858904 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-06-02 17:24:02.859175 | orchestrator | Monday 02 June 2025 17:24:02 +0000 (0:00:01.394) 0:00:41.550 ***********
2025-06-02 17:24:03.346363 | orchestrator | ok: [testbed-manager]
2025-06-02 17:24:03.644767 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:24:04.075213 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:24:04.076870 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:24:04.080432 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:24:04.080471 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:24:04.080483 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:24:04.080496 | orchestrator |
2025-06-02 17:24:04.082647 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-06-02 17:24:04.082733 | orchestrator | Monday 02 June 2025 17:24:04 +0000 (0:00:01.222) 0:00:42.772 ***********
2025-06-02 17:24:04.164938 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:24:04.165867 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:24:04.167124 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:24:04.259018 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:24:04.259110 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:24:04.259529 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:24:04.262879 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:24:04.349578 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:24:04.349668 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:24:04.350639 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:24:04.352063 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:24:04.352257 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:24:04.451924 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:24:04.452134 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:24:04.453492 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:24:04.454288 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:24:04.455131 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:24:04.455800 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:24:04.556478 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:24:04.557764 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:24:04.558665 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:24:04.560355 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:24:04.561167 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:24:04.843771 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:24:04.845269 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:24:04.846843 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:24:04.848022 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:24:04.849582 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:24:06.232494 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:24:06.232601 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:24:06.238719 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2025-06-02 17:24:06.238762 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2025-06-02 17:24:06.238774 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2025-06-02 17:24:06.240156 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2025-06-02 17:24:06.241031 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:24:06.242364 | orchestrator |
2025-06-02 17:24:06.242400 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2025-06-02 17:24:06.243461 | orchestrator | Monday 02 June 2025 17:24:06 +0000 (0:00:02.153) 0:00:44.926 ***********
2025-06-02 17:24:06.407869 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:24:06.495732 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:24:06.597345 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:24:06.680634 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:24:06.768806 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:24:06.890814 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:24:06.891167 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:24:06.892749 | orchestrator |
2025-06-02 17:24:06.893806 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-06-02 17:24:06.895319 | orchestrator | Monday 02 June 2025 17:24:06 +0000 (0:00:00.661) 0:00:45.588 ***********
2025-06-02 17:24:07.076521 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:24:07.168897 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:24:07.452865 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:24:07.549542 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:24:07.639305 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:24:07.688999 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:24:07.689126 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:24:07.689659 | orchestrator |
2025-06-02 17:24:07.690621 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:24:07.690667 | orchestrator | 2025-06-02 17:24:07 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:24:07.690724 | orchestrator | 2025-06-02 17:24:07 | INFO  | Please wait and do not abort execution.
2025-06-02 17:24:07.691326 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:24:07.692102 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:24:07.692533 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:24:07.692743 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:24:07.693636 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:24:07.693844 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:24:07.694837 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:24:07.695628 | orchestrator |
2025-06-02 17:24:07.696029 | orchestrator |
2025-06-02 17:24:07.697179 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:24:07.698119 | orchestrator | Monday 02 June 2025 17:24:07 +0000 (0:00:00.797) 0:00:46.385 ***********
2025-06-02 17:24:07.698381 | orchestrator | ===============================================================================
2025-06-02 17:24:07.699166 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 5.16s
2025-06-02 17:24:07.700401 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.84s
2025-06-02 17:24:07.700775 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.65s
2025-06-02 17:24:07.701588 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.64s
2025-06-02 17:24:07.702755 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.15s
2025-06-02 17:24:07.703551 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.13s
2025-06-02 17:24:07.703920 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.05s
2025-06-02 17:24:07.704723 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.96s
2025-06-02 17:24:07.705285 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.74s
2025-06-02 17:24:07.705761 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.73s
2025-06-02 17:24:07.706104 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.63s
2025-06-02 17:24:07.706690 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.61s
2025-06-02 17:24:07.706864 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.39s
2025-06-02 17:24:07.707346 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.34s
2025-06-02 17:24:07.708052 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.25s
2025-06-02 17:24:07.708420 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.22s
2025-06-02 17:24:07.708899 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.19s
2025-06-02 17:24:07.709582 | orchestrator | osism.commons.network : Create required directories --------------------- 1.02s
2025-06-02 17:24:07.709934 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.00s
2025-06-02 17:24:07.710791 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.92s
2025-06-02 17:24:08.467120 | orchestrator | + osism apply wireguard
2025-06-02 17:24:10.218622 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:24:10.218718 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:24:10.218733 | orchestrator | Registering Redlock._release_script
2025-06-02 17:24:10.285418 | orchestrator | 2025-06-02 17:24:10 | INFO  | Task bd00da1f-199b-4fc3-b26b-0035749cfceb (wireguard) was prepared for execution.
2025-06-02 17:24:10.285841 | orchestrator | 2025-06-02 17:24:10 | INFO  | It takes a moment until task bd00da1f-199b-4fc3-b26b-0035749cfceb (wireguard) has been started and output is visible here.
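For reference, the "Create systemd networkd netdev files" and "Create systemd networkd network files" tasks above render file pairs such as /etc/systemd/network/30-vxlan1.netdev and 30-vxlan1.network (these paths appear in the cleanup task items). The following is a hypothetical reconstruction for vxlan1 on testbed-node-0, using the logged item values (vni 23, mtu 1350, local_ip 192.168.16.10, address 192.168.128.10/20); the option names come from systemd.netdev(5)/systemd.network(5), and the role's actual template output may differ:

```ini
# /etc/systemd/network/30-vxlan1.netdev (illustrative sketch, not from the log)
[NetDev]
Name=vxlan1
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=23
Local=192.168.16.10

# /etc/systemd/network/30-vxlan1.network (illustrative sketch, not from the log)
[Match]
Name=vxlan1

[Network]
Address=192.168.128.10/20

# Unicast VXLAN: one all-zero FDB flood entry per remote VTEP in 'dests'
# (repeated for 192.168.16.12 through 192.168.16.15 as well)
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.11

[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.5
```

Since no multicast group is configured, the all-zero [BridgeFDB] entries are what let broadcast/unknown-unicast traffic reach every other VTEP listed in 'dests'.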
2025-06-02 17:24:15.045554 | orchestrator |
2025-06-02 17:24:15.046085 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-06-02 17:24:15.048646 | orchestrator |
2025-06-02 17:24:15.051087 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-06-02 17:24:15.051378 | orchestrator | Monday 02 June 2025 17:24:15 +0000 (0:00:00.345) 0:00:00.345 ***********
2025-06-02 17:24:16.694644 | orchestrator | ok: [testbed-manager]
2025-06-02 17:24:16.696310 | orchestrator |
2025-06-02 17:24:16.696345 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-06-02 17:24:16.697488 | orchestrator | Monday 02 June 2025 17:24:16 +0000 (0:00:01.651) 0:00:01.996 ***********
2025-06-02 17:24:22.436794 | orchestrator | changed: [testbed-manager]
2025-06-02 17:24:22.437565 | orchestrator |
2025-06-02 17:24:22.438815 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-06-02 17:24:22.440328 | orchestrator | Monday 02 June 2025 17:24:22 +0000 (0:00:05.743) 0:00:07.740 ***********
2025-06-02 17:24:22.971829 | orchestrator | changed: [testbed-manager]
2025-06-02 17:24:22.972106 | orchestrator |
2025-06-02 17:24:22.973396 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-06-02 17:24:22.973607 | orchestrator | Monday 02 June 2025 17:24:22 +0000 (0:00:00.536) 0:00:08.276 ***********
2025-06-02 17:24:23.393417 | orchestrator | changed: [testbed-manager]
2025-06-02 17:24:23.394325 | orchestrator |
2025-06-02 17:24:23.395034 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-06-02 17:24:23.395774 | orchestrator | Monday 02 June 2025 17:24:23 +0000 (0:00:00.421) 0:00:08.698 ***********
2025-06-02 17:24:23.921882 | orchestrator | ok: [testbed-manager]
2025-06-02 17:24:23.922341 | orchestrator |
2025-06-02 17:24:23.923580 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-06-02 17:24:23.924608 | orchestrator | Monday 02 June 2025 17:24:23 +0000 (0:00:00.525) 0:00:09.224 ***********
2025-06-02 17:24:24.471728 | orchestrator | ok: [testbed-manager]
2025-06-02 17:24:24.471833 | orchestrator |
2025-06-02 17:24:24.472609 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-06-02 17:24:24.474092 | orchestrator | Monday 02 June 2025 17:24:24 +0000 (0:00:00.548) 0:00:09.772 ***********
2025-06-02 17:24:24.883505 | orchestrator | ok: [testbed-manager]
2025-06-02 17:24:24.883614 | orchestrator |
2025-06-02 17:24:24.883631 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-06-02 17:24:24.883779 | orchestrator | Monday 02 June 2025 17:24:24 +0000 (0:00:00.414) 0:00:10.186 ***********
2025-06-02 17:24:26.012631 | orchestrator | changed: [testbed-manager]
2025-06-02 17:24:26.013291 | orchestrator |
2025-06-02 17:24:26.014560 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-06-02 17:24:26.015525 | orchestrator | Monday 02 June 2025 17:24:26 +0000 (0:00:01.129) 0:00:11.316 ***********
2025-06-02 17:24:26.817603 | orchestrator | changed: [testbed-manager] => (item=None)
2025-06-02 17:24:26.818382 | orchestrator | changed: [testbed-manager]
2025-06-02 17:24:26.819004 | orchestrator |
2025-06-02 17:24:26.819394 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-06-02 17:24:26.820107 | orchestrator | Monday 02 June 2025 17:24:26 +0000 (0:00:00.805) 0:00:12.121 ***********
2025-06-02 17:24:28.450225 | orchestrator | changed: [testbed-manager]
2025-06-02 17:24:28.450371 | orchestrator |
2025-06-02 17:24:28.450399 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-06-02 17:24:28.450734 | orchestrator | Monday 02 June 2025 17:24:28 +0000 (0:00:01.630) 0:00:13.752 ***********
2025-06-02 17:24:29.342533 | orchestrator | changed: [testbed-manager]
2025-06-02 17:24:29.342667 | orchestrator |
2025-06-02 17:24:29.343385 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:24:29.343489 | orchestrator | 2025-06-02 17:24:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:24:29.343563 | orchestrator | 2025-06-02 17:24:29 | INFO  | Please wait and do not abort execution.
2025-06-02 17:24:29.344754 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:24:29.345517 | orchestrator |
2025-06-02 17:24:29.346238 | orchestrator |
2025-06-02 17:24:29.346414 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:24:29.346758 | orchestrator | Monday 02 June 2025 17:24:29 +0000 (0:00:00.895) 0:00:14.647 ***********
2025-06-02 17:24:29.347048 | orchestrator | ===============================================================================
2025-06-02 17:24:29.347357 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.74s
2025-06-02 17:24:29.347609 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.65s
2025-06-02 17:24:29.347900 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.63s
2025-06-02 17:24:29.348859 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.13s
2025-06-02 17:24:29.349670 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.90s
2025-06-02 17:24:29.350310 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.81s
2025-06-02 17:24:29.351367 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.55s
2025-06-02 17:24:29.352021 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s
2025-06-02 17:24:29.352688 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s
2025-06-02 17:24:29.353176 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.42s
2025-06-02 17:24:29.353955 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.41s
2025-06-02 17:24:29.814339 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-06-02 17:24:29.851079 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-06-02 17:24:29.851168 | orchestrator | Dload Upload Total Spent Left Speed
2025-06-02 17:24:29.923595 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 192 0 --:--:-- --:--:-- --:--:-- 194
2025-06-02 17:24:29.935876 | orchestrator | + osism apply --environment custom workarounds
2025-06-02 17:24:31.460639 | orchestrator | 2025-06-02 17:24:31 | INFO  | Trying to run play workarounds in environment custom
2025-06-02 17:24:31.464848 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:24:31.464892 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:24:31.464906 | orchestrator | Registering Redlock._release_script
2025-06-02 17:24:31.522249 | orchestrator | 2025-06-02 17:24:31 | INFO  | Task f623a5c4-4d78-4b03-a1c6-10ef5d9c406f (workarounds) was prepared for execution.
2025-06-02 17:24:31.522313 | orchestrator | 2025-06-02 17:24:31 | INFO  | It takes a moment until task f623a5c4-4d78-4b03-a1c6-10ef5d9c406f (workarounds) has been started and output is visible here.
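The wg0.conf written by the "Copy wg0.conf configuration file" task above (and managed via wg-quick@wg0.service) is not printed in the log. A wg-quick server configuration of that general shape looks like the sketch below; every key, address, and port here is a placeholder, not a value from this deployment:

```ini
# /etc/wireguard/wg0.conf -- illustrative placeholder values only
[Interface]
Address = 192.0.2.1/24            # VPN-internal address of the manager
ListenPort = 51820
PrivateKey = <server-private-key>  # generated by the "Create public and private key" task

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>     # from the "Create preshared key" task
AllowedIPs = 192.0.2.2/32          # VPN-internal address of the client
```

The matching client configuration files distributed by the "Copy client configuration files" task would mirror this, with the roles of the key pairs reversed and an Endpoint pointing at the manager.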
2025-06-02 17:24:35.765046 | orchestrator |
2025-06-02 17:24:35.766280 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:24:35.770747 | orchestrator |
2025-06-02 17:24:35.772087 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-06-02 17:24:35.773362 | orchestrator | Monday 02 June 2025 17:24:35 +0000 (0:00:00.153) 0:00:00.153 ***********
2025-06-02 17:24:35.940069 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-06-02 17:24:36.027079 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-06-02 17:24:36.114779 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-06-02 17:24:36.202389 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-06-02 17:24:36.405714 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-06-02 17:24:36.587251 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-06-02 17:24:36.587761 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-06-02 17:24:36.588765 | orchestrator |
2025-06-02 17:24:36.589879 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-06-02 17:24:36.590448 | orchestrator |
2025-06-02 17:24:36.591072 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-02 17:24:36.591617 | orchestrator | Monday 02 June 2025 17:24:36 +0000 (0:00:00.819) 0:00:00.973 ***********
2025-06-02 17:24:39.245147 | orchestrator | ok: [testbed-manager]
2025-06-02 17:24:39.245396 | orchestrator |
2025-06-02 17:24:39.247268 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-06-02 17:24:39.248299 | orchestrator |
2025-06-02 17:24:39.249068 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-06-02 17:24:39.249624 | orchestrator | Monday 02 June 2025 17:24:39 +0000 (0:00:02.658) 0:00:03.631 ***********
2025-06-02 17:24:41.122493 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:24:41.122608 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:24:41.122624 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:24:41.122739 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:24:41.126352 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:24:41.126865 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:24:41.127476 | orchestrator |
2025-06-02 17:24:41.127954 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-06-02 17:24:41.128423 | orchestrator |
2025-06-02 17:24:41.128756 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-06-02 17:24:41.129479 | orchestrator | Monday 02 June 2025 17:24:41 +0000 (0:00:01.872) 0:00:05.504 ***********
2025-06-02 17:24:42.642886 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 17:24:42.644577 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 17:24:42.645315 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 17:24:42.646246 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 17:24:42.647167 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 17:24:42.648937 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-06-02 17:24:42.650405 | orchestrator |
2025-06-02 17:24:42.653688 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-06-02 17:24:42.654647 | orchestrator | Monday 02 June 2025 17:24:42 +0000 (0:00:01.525) 0:00:07.029 ***********
2025-06-02 17:24:46.510174 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:24:46.510381 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:24:46.510470 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:24:46.511978 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:24:46.513563 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:24:46.514699 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:24:46.515245 | orchestrator |
2025-06-02 17:24:46.515948 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-06-02 17:24:46.516693 | orchestrator | Monday 02 June 2025 17:24:46 +0000 (0:00:03.867) 0:00:10.897 ***********
2025-06-02 17:24:46.690619 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:24:46.775296 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:24:46.855552 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:24:46.936430 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:24:47.286194 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:24:47.286437 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:24:47.288187 | orchestrator |
2025-06-02 17:24:47.289721 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-06-02 17:24:47.291759 | orchestrator |
2025-06-02 17:24:47.293227 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-06-02 17:24:47.294813 | orchestrator | Monday 02 June 2025 17:24:47 +0000 (0:00:00.777) 0:00:11.674 ***********
2025-06-02 17:24:48.980334 | orchestrator | changed: [testbed-manager]
2025-06-02 17:24:48.981600 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:24:48.982593 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:24:48.983604 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:24:48.987092 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:24:48.990595 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:24:48.990767 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:24:48.994336 | orchestrator |
2025-06-02 17:24:48.997511 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-06-02 17:24:48.998074 | orchestrator | Monday 02 June 2025 17:24:48 +0000 (0:00:01.689) 0:00:13.364 ***********
2025-06-02 17:24:50.674776 | orchestrator | changed: [testbed-manager]
2025-06-02 17:24:50.675009 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:24:50.676094 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:24:50.677142 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:24:50.677836 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:24:50.678529 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:24:50.679381 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:24:50.679801 | orchestrator |
2025-06-02 17:24:50.680477 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-06-02 17:24:50.681105 | orchestrator | Monday 02 June 2025 17:24:50 +0000 (0:00:01.695) 0:00:15.060 ***********
2025-06-02 17:24:52.290820 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:24:52.291371 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:24:52.294457 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:24:52.296087 | orchestrator | ok: [testbed-manager]
2025-06-02 17:24:52.297321 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:24:52.298102 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:24:52.301264 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:24:52.302073 | orchestrator |
2025-06-02 17:24:52.302402 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-06-02 17:24:52.303806 | orchestrator
| Monday 02 June 2025 17:24:52 +0000 (0:00:01.619) 0:00:16.679 *********** 2025-06-02 17:24:54.109421 | orchestrator | changed: [testbed-manager] 2025-06-02 17:24:54.109805 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:24:54.111237 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:24:54.112053 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:24:54.112824 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:24:54.113760 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:24:54.114595 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:24:54.115102 | orchestrator | 2025-06-02 17:24:54.116705 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-06-02 17:24:54.117703 | orchestrator | Monday 02 June 2025 17:24:54 +0000 (0:00:01.815) 0:00:18.495 *********** 2025-06-02 17:24:54.286098 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:24:54.369584 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:24:54.450377 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:24:54.530118 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:24:54.607219 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:24:54.742567 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:24:54.743275 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:24:54.745475 | orchestrator | 2025-06-02 17:24:54.748447 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-06-02 17:24:54.749200 | orchestrator | 2025-06-02 17:24:54.750380 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-06-02 17:24:54.750790 | orchestrator | Monday 02 June 2025 17:24:54 +0000 (0:00:00.635) 0:00:19.131 *********** 2025-06-02 17:24:57.360929 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:24:57.361547 | orchestrator | ok: [testbed-manager] 2025-06-02 17:24:57.363052 | orchestrator | ok: 
[testbed-node-5] 2025-06-02 17:24:57.363648 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:24:57.364551 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:24:57.367085 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:24:57.367548 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:24:57.368646 | orchestrator | 2025-06-02 17:24:57.369278 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:24:57.369663 | orchestrator | 2025-06-02 17:24:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 17:24:57.369921 | orchestrator | 2025-06-02 17:24:57 | INFO  | Please wait and do not abort execution. 2025-06-02 17:24:57.371104 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:24:57.371912 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:24:57.372399 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:24:57.373076 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:24:57.373519 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:24:57.374112 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:24:57.374979 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:24:57.375412 | orchestrator | 2025-06-02 17:24:57.376117 | orchestrator | 2025-06-02 17:24:57.376731 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:24:57.377721 | orchestrator | Monday 02 June 2025 17:24:57 +0000 (0:00:02.618) 0:00:21.750 *********** 2025-06-02 17:24:57.378099 
| orchestrator | =============================================================================== 2025-06-02 17:24:57.378799 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.87s 2025-06-02 17:24:57.379679 | orchestrator | Apply netplan configuration --------------------------------------------- 2.66s 2025-06-02 17:24:57.380605 | orchestrator | Install python3-docker -------------------------------------------------- 2.62s 2025-06-02 17:24:57.381578 | orchestrator | Apply netplan configuration --------------------------------------------- 1.87s 2025-06-02 17:24:57.381928 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.82s 2025-06-02 17:24:57.382377 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.70s 2025-06-02 17:24:57.383072 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.69s 2025-06-02 17:24:57.383427 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.62s 2025-06-02 17:24:57.383812 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.53s 2025-06-02 17:24:57.384468 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.82s 2025-06-02 17:24:57.385065 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.78s 2025-06-02 17:24:57.385359 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.64s 2025-06-02 17:24:58.083805 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-06-02 17:24:59.848041 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:24:59.848134 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:24:59.848148 | orchestrator | Registering Redlock._release_script 2025-06-02 17:24:59.928209 | orchestrator | 2025-06-02 17:24:59 | INFO  
| Task aed2f8a7-d5be-447c-9d8f-f0ad3437cd7f (reboot) was prepared for execution. 2025-06-02 17:24:59.928296 | orchestrator | 2025-06-02 17:24:59 | INFO  | It takes a moment until task aed2f8a7-d5be-447c-9d8f-f0ad3437cd7f (reboot) has been started and output is visible here. 2025-06-02 17:25:04.372673 | orchestrator | 2025-06-02 17:25:04.375865 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 17:25:04.377616 | orchestrator | 2025-06-02 17:25:04.378119 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 17:25:04.379079 | orchestrator | Monday 02 June 2025 17:25:04 +0000 (0:00:00.222) 0:00:00.222 *********** 2025-06-02 17:25:04.473648 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:25:04.474461 | orchestrator | 2025-06-02 17:25:04.476090 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 17:25:04.476123 | orchestrator | Monday 02 June 2025 17:25:04 +0000 (0:00:00.104) 0:00:00.326 *********** 2025-06-02 17:25:05.433805 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:25:05.434626 | orchestrator | 2025-06-02 17:25:05.435401 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 17:25:05.436451 | orchestrator | Monday 02 June 2025 17:25:05 +0000 (0:00:00.957) 0:00:01.284 *********** 2025-06-02 17:25:05.563736 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:25:05.564178 | orchestrator | 2025-06-02 17:25:05.564747 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 17:25:05.565746 | orchestrator | 2025-06-02 17:25:05.566677 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 17:25:05.568739 | orchestrator | Monday 02 June 2025 17:25:05 +0000 (0:00:00.131) 0:00:01.415 *********** 2025-06-02 17:25:05.657819 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 17:25:05.658919 | orchestrator | 2025-06-02 17:25:05.661003 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 17:25:05.661032 | orchestrator | Monday 02 June 2025 17:25:05 +0000 (0:00:00.096) 0:00:01.511 *********** 2025-06-02 17:25:06.316650 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:25:06.316803 | orchestrator | 2025-06-02 17:25:06.317778 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 17:25:06.318005 | orchestrator | Monday 02 June 2025 17:25:06 +0000 (0:00:00.659) 0:00:02.170 *********** 2025-06-02 17:25:06.443861 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:25:06.444835 | orchestrator | 2025-06-02 17:25:06.445971 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 17:25:06.446826 | orchestrator | 2025-06-02 17:25:06.447450 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 17:25:06.447980 | orchestrator | Monday 02 June 2025 17:25:06 +0000 (0:00:00.123) 0:00:02.293 *********** 2025-06-02 17:25:06.660858 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:25:06.661603 | orchestrator | 2025-06-02 17:25:06.662117 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 17:25:06.662786 | orchestrator | Monday 02 June 2025 17:25:06 +0000 (0:00:00.221) 0:00:02.515 *********** 2025-06-02 17:25:07.282642 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:25:07.283057 | orchestrator | 2025-06-02 17:25:07.283803 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 17:25:07.284677 | orchestrator | Monday 02 June 2025 17:25:07 +0000 (0:00:00.620) 0:00:03.135 *********** 2025-06-02 17:25:07.404230 | orchestrator | skipping: [testbed-node-2] 
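Each node above gets its own `[Reboot systems]` play in sequence: the confirmation guard is skipped (because `ireallymeanit=yes` was passed), the reboot is issued without waiting, and the "wait for the reboot to complete" task is skipped so reachability can be checked later in one pass. A minimal shell sketch of this fire-and-forget pattern, with `rolling_reboot` and its `reboot_cmd` parameter being hypothetical names, not part of the testbed scripts:

```shell
#!/usr/bin/env bash
# Sketch only: issue a reboot per node, in order, without waiting for any
# node to come back. reboot_cmd is a stand-in for the real mechanism
# (e.g. ssh "$node" 'sudo shutdown -r now').
rolling_reboot() {
    local reboot_cmd="$1"; shift
    local node rc=0
    for node in "$@"; do
        # fire-and-forget: do not wait for the reboot to complete
        if ! "$reboot_cmd" "$node"; then
            echo "reboot failed for $node" >&2
            rc=1
        fi
    done
    return "$rc"
}
```

Reachability is then verified separately (see the `wait-for-connection` run later in this log), which keeps the reboot step fast even with many nodes.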
2025-06-02 17:25:07.405288 | orchestrator | 2025-06-02 17:25:07.406514 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 17:25:07.407478 | orchestrator | 2025-06-02 17:25:07.409274 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 17:25:07.409623 | orchestrator | Monday 02 June 2025 17:25:07 +0000 (0:00:00.119) 0:00:03.254 *********** 2025-06-02 17:25:07.505276 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:25:07.506225 | orchestrator | 2025-06-02 17:25:07.507032 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 17:25:07.507748 | orchestrator | Monday 02 June 2025 17:25:07 +0000 (0:00:00.103) 0:00:03.358 *********** 2025-06-02 17:25:08.145846 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:25:08.146911 | orchestrator | 2025-06-02 17:25:08.147979 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 17:25:08.149315 | orchestrator | Monday 02 June 2025 17:25:08 +0000 (0:00:00.641) 0:00:03.999 *********** 2025-06-02 17:25:08.257195 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:25:08.258116 | orchestrator | 2025-06-02 17:25:08.259017 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 17:25:08.260763 | orchestrator | 2025-06-02 17:25:08.262127 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 17:25:08.262510 | orchestrator | Monday 02 June 2025 17:25:08 +0000 (0:00:00.108) 0:00:04.108 *********** 2025-06-02 17:25:08.358243 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:25:08.359311 | orchestrator | 2025-06-02 17:25:08.360668 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 17:25:08.361187 | orchestrator | Monday 02 June 2025 
17:25:08 +0000 (0:00:00.103) 0:00:04.211 *********** 2025-06-02 17:25:09.064116 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:25:09.064852 | orchestrator | 2025-06-02 17:25:09.065678 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 17:25:09.066990 | orchestrator | Monday 02 June 2025 17:25:09 +0000 (0:00:00.703) 0:00:04.914 *********** 2025-06-02 17:25:09.193822 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:25:09.195345 | orchestrator | 2025-06-02 17:25:09.195481 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-02 17:25:09.198934 | orchestrator | 2025-06-02 17:25:09.199021 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-02 17:25:09.199037 | orchestrator | Monday 02 June 2025 17:25:09 +0000 (0:00:00.132) 0:00:05.047 *********** 2025-06-02 17:25:09.302413 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:25:09.302644 | orchestrator | 2025-06-02 17:25:09.304139 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-02 17:25:09.304514 | orchestrator | Monday 02 June 2025 17:25:09 +0000 (0:00:00.108) 0:00:05.155 *********** 2025-06-02 17:25:09.998272 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:25:10.000059 | orchestrator | 2025-06-02 17:25:10.000791 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-02 17:25:10.001993 | orchestrator | Monday 02 June 2025 17:25:09 +0000 (0:00:00.695) 0:00:05.851 *********** 2025-06-02 17:25:10.042295 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:25:10.043351 | orchestrator | 2025-06-02 17:25:10.044029 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:25:10.044720 | orchestrator | 2025-06-02 17:25:10 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-06-02 17:25:10.045863 | orchestrator | 2025-06-02 17:25:10 | INFO  | Please wait and do not abort execution. 2025-06-02 17:25:10.046802 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:25:10.047761 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:25:10.048800 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:25:10.049286 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:25:10.050295 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:25:10.051048 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:25:10.051839 | orchestrator | 2025-06-02 17:25:10.052221 | orchestrator | 2025-06-02 17:25:10.052940 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:25:10.054376 | orchestrator | Monday 02 June 2025 17:25:10 +0000 (0:00:00.044) 0:00:05.895 *********** 2025-06-02 17:25:10.055138 | orchestrator | =============================================================================== 2025-06-02 17:25:10.056329 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.28s 2025-06-02 17:25:10.056601 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.74s 2025-06-02 17:25:10.057358 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.66s 2025-06-02 17:25:10.691172 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-06-02 17:25:12.321799 | orchestrator | Registering Redlock._acquired_script 2025-06-02 
17:25:12.321865 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:25:12.321976 | orchestrator | Registering Redlock._release_script 2025-06-02 17:25:12.389478 | orchestrator | 2025-06-02 17:25:12 | INFO  | Task 260a995a-df7a-4243-8d5c-c62ec2902763 (wait-for-connection) was prepared for execution. 2025-06-02 17:25:12.389555 | orchestrator | 2025-06-02 17:25:12 | INFO  | It takes a moment until task 260a995a-df7a-4243-8d5c-c62ec2902763 (wait-for-connection) has been started and output is visible here. 2025-06-02 17:25:16.201187 | orchestrator | 2025-06-02 17:25:16.202452 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-06-02 17:25:16.207165 | orchestrator | 2025-06-02 17:25:16.208871 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-06-02 17:25:16.209927 | orchestrator | Monday 02 June 2025 17:25:16 +0000 (0:00:00.264) 0:00:00.264 *********** 2025-06-02 17:25:28.571484 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:25:28.571625 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:25:28.571642 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:25:28.572395 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:25:28.573442 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:25:28.575494 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:25:28.576225 | orchestrator | 2025-06-02 17:25:28.576911 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:25:28.577108 | orchestrator | 2025-06-02 17:25:28 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 17:25:28.577666 | orchestrator | 2025-06-02 17:25:28 | INFO  | Please wait and do not abort execution. 
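The `osism apply wait-for-connection` run above boils down to polling each rebooted node until a probe succeeds or a deadline passes. A hedged sketch of that loop, assuming a hypothetical `probe` command (in practice an SSH/port check) rather than the actual playbook implementation:

```shell
#!/usr/bin/env bash
# Sketch only: poll "$probe $host" until it succeeds, giving up after
# $timeout seconds. Parameter names are illustrative.
wait_for_connection() {
    local probe="$1" host="$2" timeout="${3:-300}" interval="${4:-5}"
    local waited=0
    until "$probe" "$host"; do
        waited=$((waited + interval))
        if [ "$waited" -ge "$timeout" ]; then
            echo "timed out waiting for $host" >&2
            return 1
        fi
        sleep "$interval"
    done
}
```

The ~12s the task took in the recap above is consistent with a few such probe intervals per node while sshd comes back up.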
2025-06-02 17:25:28.578468 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:25:28.579382 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:25:28.580154 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:25:28.580520 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:25:28.584379 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:25:28.584852 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:25:28.585451 | orchestrator | 2025-06-02 17:25:28.585922 | orchestrator | 2025-06-02 17:25:28.586451 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:25:28.586805 | orchestrator | Monday 02 June 2025 17:25:28 +0000 (0:00:12.368) 0:00:12.633 *********** 2025-06-02 17:25:28.587259 | orchestrator | =============================================================================== 2025-06-02 17:25:28.587639 | orchestrator | Wait until remote system is reachable ---------------------------------- 12.37s 2025-06-02 17:25:29.005644 | orchestrator | + osism apply hddtemp 2025-06-02 17:25:30.507972 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:25:30.508074 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:25:30.508089 | orchestrator | Registering Redlock._release_script 2025-06-02 17:25:30.559906 | orchestrator | 2025-06-02 17:25:30 | INFO  | Task 2d5081d0-c666-4e56-b5a6-1db3751ba945 (hddtemp) was prepared for execution. 
2025-06-02 17:25:30.559966 | orchestrator | 2025-06-02 17:25:30 | INFO  | It takes a moment until task 2d5081d0-c666-4e56-b5a6-1db3751ba945 (hddtemp) has been started and output is visible here. 2025-06-02 17:25:34.410093 | orchestrator | 2025-06-02 17:25:34.410206 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-06-02 17:25:34.410223 | orchestrator | 2025-06-02 17:25:34.410235 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-06-02 17:25:34.411636 | orchestrator | Monday 02 June 2025 17:25:34 +0000 (0:00:00.261) 0:00:00.261 *********** 2025-06-02 17:25:34.590973 | orchestrator | ok: [testbed-manager] 2025-06-02 17:25:34.664810 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:25:34.740198 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:25:34.812306 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:25:35.006263 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:25:35.159545 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:25:35.160167 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:25:35.160879 | orchestrator | 2025-06-02 17:25:35.161826 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-06-02 17:25:35.162468 | orchestrator | Monday 02 June 2025 17:25:35 +0000 (0:00:00.752) 0:00:01.013 *********** 2025-06-02 17:25:36.529430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:25:36.532566 | orchestrator | 2025-06-02 17:25:36.533092 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-06-02 17:25:36.534141 | orchestrator | Monday 02 June 2025 17:25:36 +0000 (0:00:01.366) 0:00:02.380 *********** 2025-06-02 17:25:38.517733 | 
orchestrator | ok: [testbed-manager] 2025-06-02 17:25:38.518721 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:25:38.519348 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:25:38.520237 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:25:38.521137 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:25:38.522351 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:25:38.524185 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:25:38.524689 | orchestrator | 2025-06-02 17:25:38.525429 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-06-02 17:25:38.526224 | orchestrator | Monday 02 June 2025 17:25:38 +0000 (0:00:01.992) 0:00:04.372 *********** 2025-06-02 17:25:39.212532 | orchestrator | changed: [testbed-manager] 2025-06-02 17:25:39.287725 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:25:39.716953 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:25:39.718334 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:25:39.719746 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:25:39.722632 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:25:39.723625 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:25:39.724783 | orchestrator | 2025-06-02 17:25:39.725710 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-06-02 17:25:39.726644 | orchestrator | Monday 02 June 2025 17:25:39 +0000 (0:00:01.197) 0:00:05.570 *********** 2025-06-02 17:25:40.793293 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:25:40.795252 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:25:40.796024 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:25:40.797594 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:25:40.798495 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:25:40.799500 | orchestrator | ok: [testbed-manager] 2025-06-02 17:25:40.800453 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:25:40.801231 | orchestrator | 
2025-06-02 17:25:40.801562 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-06-02 17:25:40.802277 | orchestrator | Monday 02 June 2025 17:25:40 +0000 (0:00:01.079) 0:00:06.649 *********** 2025-06-02 17:25:41.163764 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:25:41.239202 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:25:41.307837 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:25:41.379379 | orchestrator | changed: [testbed-manager] 2025-06-02 17:25:41.488810 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:25:41.488917 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:25:41.489663 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:25:41.490080 | orchestrator | 2025-06-02 17:25:41.491236 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-06-02 17:25:41.492298 | orchestrator | Monday 02 June 2025 17:25:41 +0000 (0:00:00.694) 0:00:07.344 *********** 2025-06-02 17:25:53.572402 | orchestrator | changed: [testbed-manager] 2025-06-02 17:25:53.572546 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:25:53.572756 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:25:53.573300 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:25:53.574005 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:25:53.575062 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:25:53.576012 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:25:53.576444 | orchestrator | 2025-06-02 17:25:53.577279 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-06-02 17:25:53.577989 | orchestrator | Monday 02 June 2025 17:25:53 +0000 (0:00:12.083) 0:00:19.427 *********** 2025-06-02 17:25:54.779339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, 
testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:25:54.779861 | orchestrator | 2025-06-02 17:25:54.781327 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-06-02 17:25:54.782341 | orchestrator | Monday 02 June 2025 17:25:54 +0000 (0:00:01.205) 0:00:20.633 *********** 2025-06-02 17:25:56.555745 | orchestrator | changed: [testbed-manager] 2025-06-02 17:25:56.555912 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:25:56.556353 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:25:56.556965 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:25:56.557338 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:25:56.557866 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:25:56.559542 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:25:56.559758 | orchestrator | 2025-06-02 17:25:56.560798 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:25:56.560935 | orchestrator | 2025-06-02 17:25:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 17:25:56.560955 | orchestrator | 2025-06-02 17:25:56 | INFO  | Please wait and do not abort execution. 
2025-06-02 17:25:56.561176 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:25:56.561648 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:25:56.562170 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:25:56.563718 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:25:56.564109 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:25:56.564762 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:25:56.567621 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:25:56.568006 | orchestrator | 2025-06-02 17:25:56.571512 | orchestrator | 2025-06-02 17:25:56.571957 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:25:56.574493 | orchestrator | Monday 02 June 2025 17:25:56 +0000 (0:00:01.777) 0:00:22.410 *********** 2025-06-02 17:25:56.575015 | orchestrator | =============================================================================== 2025-06-02 17:25:56.575671 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.08s 2025-06-02 17:25:56.578919 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.99s 2025-06-02 17:25:56.579219 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.78s 2025-06-02 17:25:56.579740 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.37s 2025-06-02 17:25:56.580234 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.21s 
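On Debian-family hosts the hddtemp role's drivetemp handling seen above amounts to persisting the module across boots and loading it immediately where available. A sketch under those assumptions (the `conf_dir` parameter exists only so the sketch can be exercised without touching a real system; run the real thing as root):

```shell
#!/usr/bin/env bash
# Sketch only: enable the drivetemp hwmon kernel module the way the log's
# "Enable"/"Check"/"Load" tasks suggest. /etc/modules-load.d is the
# conventional systemd path.
enable_drivetemp() {
    local conf_dir="${1:-/etc/modules-load.d}"
    # persist: have systemd load drivetemp at every boot
    printf 'drivetemp\n' > "$conf_dir/drivetemp.conf"
    # load now, but only if the module exists for this kernel
    # (matches the skipped "Load" task on hosts where it is already loaded
    # or unavailable)
    if modinfo drivetemp >/dev/null 2>&1; then
        modprobe drivetemp
    fi
}
```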
2025-06-02 17:25:56.580704 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.20s 2025-06-02 17:25:56.581146 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.08s 2025-06-02 17:25:56.581603 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.75s 2025-06-02 17:25:56.582299 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.69s 2025-06-02 17:25:57 | orchestrator | ++ semver latest 7.1.1 2025-06-02 17:25:57.048491 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-02 17:25:57.048557 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-02 17:25:57.048572 | orchestrator | + sudo systemctl restart manager.service 2025-06-02 17:26:10.625859 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-02 17:26:10.625984 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-02 17:26:10.626000 | orchestrator | + local max_attempts=60 2025-06-02 17:26:10.626097 | orchestrator | + local name=ceph-ansible 2025-06-02 17:26:10.626111 | orchestrator | + local attempt_num=1 2025-06-02 17:26:10.626123 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:26:10.665514 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 17:26:10.665599 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 17:26:10.665615 | orchestrator | + sleep 5 2025-06-02 17:26:15.669782 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:26:15.706363 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 17:26:15.706429 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 17:26:15.706443 | orchestrator | + sleep 5 2025-06-02 17:26:20.710283 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:26:20.747593 | orchestrator | + [[ unhealthy == 
\h\e\a\l\t\h\y ]] 2025-06-02 17:26:20.747675 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 17:26:20.747720 | orchestrator | + sleep 5 2025-06-02 17:26:25.751478 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:26:25.787261 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 17:26:25.787369 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 17:26:25.787395 | orchestrator | + sleep 5 2025-06-02 17:26:30.791881 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:26:30.839239 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 17:26:30.839310 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 17:26:30.839324 | orchestrator | + sleep 5 2025-06-02 17:26:35.843876 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:26:35.879651 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 17:26:35.879702 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 17:26:35.879707 | orchestrator | + sleep 5 2025-06-02 17:26:40.907910 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:26:40.928016 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-02 17:26:40.928109 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 17:26:40.928121 | orchestrator | + sleep 5 2025-06-02 17:26:45.933090 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:26:45.975191 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 17:26:45.975281 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 17:26:45.975297 | orchestrator | + sleep 5 2025-06-02 17:26:50.982759 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:26:51.036882 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 
2025-06-02 17:26:51.036983 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 17:26:51.036998 | orchestrator | + sleep 5 2025-06-02 17:26:56.038149 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:26:56.076562 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 17:26:56.076644 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 17:26:56.076659 | orchestrator | + sleep 5 2025-06-02 17:27:01.080709 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:27:01.123008 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 17:27:01.123112 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 17:27:01.123126 | orchestrator | + sleep 5 2025-06-02 17:27:06.128316 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:27:06.169536 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 17:27:06.169613 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 17:27:06.169622 | orchestrator | + sleep 5 2025-06-02 17:27:11.174467 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:27:11.204681 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-02 17:27:11.204810 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-02 17:27:11.204839 | orchestrator | + sleep 5 2025-06-02 17:27:16.209084 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-02 17:27:16.254491 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 17:27:16.254561 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-02 17:27:16.254576 | orchestrator | + local max_attempts=60 2025-06-02 17:27:16.254589 | orchestrator | + local name=kolla-ansible 2025-06-02 17:27:16.254600 | orchestrator | + local attempt_num=1 2025-06-02 17:27:16.255095 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-02 17:27:16.293422 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 17:27:16.293495 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-02 17:27:16.293511 | orchestrator | + local max_attempts=60 2025-06-02 17:27:16.293525 | orchestrator | + local name=osism-ansible 2025-06-02 17:27:16.293537 | orchestrator | + local attempt_num=1 2025-06-02 17:27:16.294229 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-02 17:27:16.340143 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-02 17:27:16.340191 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-02 17:27:16.340204 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-02 17:27:16.533745 | orchestrator | ARA in ceph-ansible already disabled. 2025-06-02 17:27:16.695391 | orchestrator | ARA in kolla-ansible already disabled. 2025-06-02 17:27:16.868974 | orchestrator | ARA in osism-ansible already disabled. 2025-06-02 17:27:17.018269 | orchestrator | ARA in osism-kubernetes already disabled. 2025-06-02 17:27:17.019423 | orchestrator | + osism apply gather-facts 2025-06-02 17:27:18.865358 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:27:18.865516 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:27:18.865549 | orchestrator | Registering Redlock._release_script 2025-06-02 17:27:18.943122 | orchestrator | 2025-06-02 17:27:18 | INFO  | Task 4303da66-f0ee-49fc-98ea-c647b7666516 (gather-facts) was prepared for execution. 2025-06-02 17:27:18.943249 | orchestrator | 2025-06-02 17:27:18 | INFO  | It takes a moment until task 4303da66-f0ee-49fc-98ea-c647b7666516 (gather-facts) has been started and output is visible here. 
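The `wait_for_container_healthy` polling traced above can be reconstructed from its xtrace: the local names, the `docker inspect` health probe, and the 5-second sleep are read straight from the log; everything else (the error message, the `DOCKER` indirection added for testability in place of the hard-coded `/usr/bin/docker`) is an assumption, so treat this as a sketch rather than the actual testbed helper:

```shell
#!/usr/bin/env bash
# Sketch of wait_for_container_healthy as seen in the xtrace above.
# DOCKER is an indirection assumed here; the log shows /usr/bin/docker.
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the container's health status until it reports "healthy",
    # giving up after max_attempts probes spaced 5 seconds apart.
    while [[ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")" != healthy ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name never became healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the run above the same helper is called back to back for `ceph-ansible`, `kolla-ansible`, and `osism-ansible`; only the first had to loop, since the latter two were already healthy on the first probe.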
2025-06-02 17:27:23.221801 | orchestrator | 2025-06-02 17:27:23.224596 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-02 17:27:23.227044 | orchestrator | 2025-06-02 17:27:23.228641 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-02 17:27:23.230344 | orchestrator | Monday 02 June 2025 17:27:23 +0000 (0:00:00.247) 0:00:00.247 *********** 2025-06-02 17:27:29.270604 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:27:29.270728 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:27:29.270801 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:27:29.273558 | orchestrator | ok: [testbed-manager] 2025-06-02 17:27:29.273654 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:27:29.276567 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:27:29.277593 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:27:29.277994 | orchestrator | 2025-06-02 17:27:29.281846 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-02 17:27:29.282555 | orchestrator | 2025-06-02 17:27:29.282790 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-02 17:27:29.284694 | orchestrator | Monday 02 June 2025 17:27:29 +0000 (0:00:06.044) 0:00:06.292 *********** 2025-06-02 17:27:29.419708 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:27:29.500170 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:27:29.580801 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:27:29.662155 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:27:29.758299 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:27:29.804102 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:27:29.804577 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:27:29.804904 | orchestrator | 2025-06-02 17:27:29.806488 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-02 17:27:29.807825 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:27:29.807849 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:27:29.807861 | orchestrator | 2025-06-02 17:27:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 17:27:29.807873 | orchestrator | 2025-06-02 17:27:29 | INFO  | Please wait and do not abort execution. 2025-06-02 17:27:29.807885 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:27:29.810297 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:27:29.810692 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:27:29.812171 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:27:29.813430 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:27:29.814277 | orchestrator | 2025-06-02 17:27:29.816381 | orchestrator | 2025-06-02 17:27:29.816511 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:27:29.816613 | orchestrator | Monday 02 June 2025 17:27:29 +0000 (0:00:00.542) 0:00:06.835 *********** 2025-06-02 17:27:29.817131 | orchestrator | =============================================================================== 2025-06-02 17:27:29.817944 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.04s 2025-06-02 17:27:29.818547 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-06-02 17:27:30.515705 | 
orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-06-02 17:27:30.535568 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-06-02 17:27:30.549727 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-06-02 17:27:30.567076 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-06-02 17:27:30.582974 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-06-02 17:27:30.596439 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-06-02 17:27:30.606347 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-06-02 17:27:30.618911 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-06-02 17:27:30.631367 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-06-02 17:27:30.647064 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-06-02 17:27:30.659187 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-06-02 17:27:30.670503 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-06-02 17:27:30.684428 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2025-06-02 17:27:30.704050 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-06-02 17:27:30.723329 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-06-02 17:27:30.742067 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-06-02 17:27:30.758127 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-06-02 17:27:30.779641 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-06-02 17:27:30.791349 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-06-02 17:27:30.806106 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-06-02 17:27:30.820993 | orchestrator | + [[ false == \t\r\u\e ]] 2025-06-02 17:27:31.179297 | orchestrator | ok: Runtime: 0:20:45.392203 2025-06-02 17:27:31.278647 | 2025-06-02 17:27:31.278804 | TASK [Deploy services] 2025-06-02 17:27:31.810504 | orchestrator | skipping: Conditional result was False 2025-06-02 17:27:31.828362 | 2025-06-02 17:27:31.828546 | TASK [Deploy in a nutshell] 2025-06-02 17:27:32.533165 | orchestrator | + set -e 2025-06-02 17:27:32.533376 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 17:27:32.533417 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 17:27:32.533440 | orchestrator | ++ INTERACTIVE=false 2025-06-02 17:27:32.533454 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 17:27:32.533468 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 17:27:32.533481 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 17:27:32.533528 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 17:27:32.533557 | orchestrator | ++ 
NUMBER_OF_NODES=6 2025-06-02 17:27:32.533572 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 17:27:32.533587 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 17:27:32.533600 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 17:27:32.533618 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 17:27:32.533629 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-02 17:27:32.533650 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-02 17:27:32.533661 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 17:27:32.533677 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 17:27:32.533688 | orchestrator | ++ export ARA=false 2025-06-02 17:27:32.533700 | orchestrator | ++ ARA=false 2025-06-02 17:27:32.533726 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 17:27:32.533837 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-02 17:27:32.533856 | orchestrator | ++ export TEMPEST=false 2025-06-02 17:27:32.533867 | orchestrator | ++ TEMPEST=false 2025-06-02 17:27:32.533878 | orchestrator | ++ export IS_ZUUL=true 2025-06-02 17:27:32.533889 | orchestrator | ++ IS_ZUUL=true 2025-06-02 17:27:32.533900 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65 2025-06-02 17:27:32.533911 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65 2025-06-02 17:27:32.533923 | orchestrator | ++ export EXTERNAL_API=false 2025-06-02 17:27:32.533933 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 17:27:32.533944 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 17:27:32.533954 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 17:27:32.533965 | orchestrator | 2025-06-02 17:27:32.533976 | orchestrator | # PULL IMAGES 2025-06-02 17:27:32.533988 | orchestrator | 2025-06-02 17:27:32.533999 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 17:27:32.534010 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 17:27:32.534117 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 17:27:32.534137 | orchestrator | 
++ CEPH_STACK=ceph-ansible 2025-06-02 17:27:32.534148 | orchestrator | + echo 2025-06-02 17:27:32.534159 | orchestrator | + echo '# PULL IMAGES' 2025-06-02 17:27:32.534169 | orchestrator | + echo 2025-06-02 17:27:32.534686 | orchestrator | ++ semver latest 7.0.0 2025-06-02 17:27:32.592378 | orchestrator | + [[ -1 -ge 0 ]] 2025-06-02 17:27:32.592437 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-02 17:27:32.592453 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-06-02 17:27:34.261622 | orchestrator | 2025-06-02 17:27:34 | INFO  | Trying to run play pull-images in environment custom 2025-06-02 17:27:34.266394 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:27:34.267141 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:27:34.267151 | orchestrator | Registering Redlock._release_script 2025-06-02 17:27:34.326282 | orchestrator | 2025-06-02 17:27:34 | INFO  | Task 3e78208f-44e4-4a3c-95e9-81c60abe3316 (pull-images) was prepared for execution. 2025-06-02 17:27:34.326386 | orchestrator | 2025-06-02 17:27:34 | INFO  | It takes a moment until task 3e78208f-44e4-4a3c-95e9-81c60abe3316 (pull-images) has been started and output is visible here. 
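The trace above shows a small version gate before the pull: `semver latest 7.0.0` prints `-1`, the `-ge 0` test fails, and a `latest` fallback check lets the run proceed anyway. A hedged sketch of that logic, assuming `semver` is a helper that prints `-1`/`0`/`1` like a three-way comparison (the function name `version_at_least` is invented here for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the version gate traced above. Assumes a `semver A B` helper
# that prints -1, 0, or 1 depending on how A compares to B.
version_at_least() {
    local have=$1 want=$2
    # "latest" trivially satisfies any minimum version, mirroring the
    # [[ latest == \l\a\t\e\s\t ]] fallback in the trace.
    [[ $have == latest ]] && return 0
    [[ $(semver "$have" "$want") -ge 0 ]]
}

# Gating the pull as in the trace (not executed here):
#   version_at_least "$MANAGER_VERSION" 7.0.0 && osism apply -r 2 -e custom pull-images
```

The `-r 2` flag on the eventual `osism apply` call matches the `OSISM_APPLY_RETRY` style of retry budget exported earlier in this script.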
2025-06-02 17:27:38.410220 | orchestrator | 2025-06-02 17:27:38.412563 | orchestrator | PLAY [Pull images] ************************************************************* 2025-06-02 17:27:38.413649 | orchestrator | 2025-06-02 17:27:38.415893 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-06-02 17:27:38.416226 | orchestrator | Monday 02 June 2025 17:27:38 +0000 (0:00:00.159) 0:00:00.159 *********** 2025-06-02 17:28:48.593847 | orchestrator | changed: [testbed-manager] 2025-06-02 17:28:48.593994 | orchestrator | 2025-06-02 17:28:48.594934 | orchestrator | TASK [Pull other images] ******************************************************* 2025-06-02 17:28:48.595530 | orchestrator | Monday 02 June 2025 17:28:48 +0000 (0:01:10.186) 0:01:10.346 *********** 2025-06-02 17:29:43.494257 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-06-02 17:29:43.494372 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-06-02 17:29:43.498430 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-06-02 17:29:43.498515 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-06-02 17:29:43.498776 | orchestrator | changed: [testbed-manager] => (item=common) 2025-06-02 17:29:43.500571 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-06-02 17:29:43.500796 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-06-02 17:29:43.501087 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-06-02 17:29:43.501164 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-06-02 17:29:43.501504 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-06-02 17:29:43.502725 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-06-02 17:29:43.502746 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-06-02 17:29:43.502756 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-06-02 17:29:43.503771 
| orchestrator | changed: [testbed-manager] => (item=memcached) 2025-06-02 17:29:43.503907 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-06-02 17:29:43.504173 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-06-02 17:29:43.504474 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-06-02 17:29:43.506146 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-06-02 17:29:43.506685 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-06-02 17:29:43.509319 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-06-02 17:29:43.509335 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-06-02 17:29:43.509340 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-06-02 17:29:43.509542 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-06-02 17:29:43.510302 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-06-02 17:29:43.511092 | orchestrator | 2025-06-02 17:29:43.511766 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:29:43.512295 | orchestrator | 2025-06-02 17:29:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 17:29:43.512307 | orchestrator | 2025-06-02 17:29:43 | INFO  | Please wait and do not abort execution. 
2025-06-02 17:29:43.513779 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:29:43.514346 | orchestrator | 2025-06-02 17:29:43.515187 | orchestrator | 2025-06-02 17:29:43.516913 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:29:43.517264 | orchestrator | Monday 02 June 2025 17:29:43 +0000 (0:00:54.898) 0:02:05.245 *********** 2025-06-02 17:29:43.517316 | orchestrator | =============================================================================== 2025-06-02 17:29:43.517842 | orchestrator | Pull keystone image ---------------------------------------------------- 70.19s 2025-06-02 17:29:43.518101 | orchestrator | Pull other images ------------------------------------------------------ 54.90s 2025-06-02 17:29:45.915524 | orchestrator | 2025-06-02 17:29:45 | INFO  | Trying to run play wipe-partitions in environment custom 2025-06-02 17:29:45.919574 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:29:45.919643 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:29:45.919660 | orchestrator | Registering Redlock._release_script 2025-06-02 17:29:45.978595 | orchestrator | 2025-06-02 17:29:45 | INFO  | Task 2989c1ff-3187-47ad-9221-f3e6cfe72e30 (wipe-partitions) was prepared for execution. 2025-06-02 17:29:45.978718 | orchestrator | 2025-06-02 17:29:45 | INFO  | It takes a moment until task 2989c1ff-3187-47ad-9221-f3e6cfe72e30 (wipe-partitions) has been started and output is visible here. 
2025-06-02 17:29:49.684219 | orchestrator | 2025-06-02 17:29:49.685799 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-06-02 17:29:49.685866 | orchestrator | 2025-06-02 17:29:49.686308 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-06-02 17:29:49.686738 | orchestrator | Monday 02 June 2025 17:29:49 +0000 (0:00:00.140) 0:00:00.140 *********** 2025-06-02 17:29:50.248289 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:29:50.248384 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:29:50.248394 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:29:50.248401 | orchestrator | 2025-06-02 17:29:50.248750 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-06-02 17:29:50.248908 | orchestrator | Monday 02 June 2025 17:29:50 +0000 (0:00:00.578) 0:00:00.718 *********** 2025-06-02 17:29:50.386611 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:29:50.469666 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:50.471502 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:29:50.471893 | orchestrator | 2025-06-02 17:29:50.472408 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-06-02 17:29:50.472561 | orchestrator | Monday 02 June 2025 17:29:50 +0000 (0:00:00.222) 0:00:00.941 *********** 2025-06-02 17:29:51.124025 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:29:51.124165 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:29:51.124182 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:29:51.124253 | orchestrator | 2025-06-02 17:29:51.124451 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-06-02 17:29:51.124698 | orchestrator | Monday 02 June 2025 17:29:51 +0000 (0:00:00.652) 0:00:01.594 *********** 2025-06-02 17:29:51.288956 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 17:29:51.375405 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:29:51.375495 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:29:51.375508 | orchestrator | 2025-06-02 17:29:51.375521 | orchestrator | TASK [Check device availability] *********************************************** 2025-06-02 17:29:51.375534 | orchestrator | Monday 02 June 2025 17:29:51 +0000 (0:00:00.247) 0:00:01.841 *********** 2025-06-02 17:29:52.575200 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-02 17:29:52.575333 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-02 17:29:52.576315 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-02 17:29:52.576444 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-02 17:29:52.576832 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-02 17:29:52.576865 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-02 17:29:52.576984 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-02 17:29:52.578430 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-02 17:29:52.580365 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-02 17:29:52.580415 | orchestrator | 2025-06-02 17:29:52.583360 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-06-02 17:29:52.587327 | orchestrator | Monday 02 June 2025 17:29:52 +0000 (0:00:01.205) 0:00:03.046 *********** 2025-06-02 17:29:53.956257 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-06-02 17:29:53.959176 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-06-02 17:29:53.961189 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-06-02 17:29:53.963446 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-06-02 17:29:53.965058 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-06-02 17:29:53.966265 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2025-06-02 17:29:53.967993 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-06-02 17:29:53.968021 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-06-02 17:29:53.968033 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-06-02 17:29:53.968134 | orchestrator | 2025-06-02 17:29:53.968865 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-06-02 17:29:53.969215 | orchestrator | Monday 02 June 2025 17:29:53 +0000 (0:00:01.377) 0:00:04.424 *********** 2025-06-02 17:29:56.262498 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-02 17:29:56.265017 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-02 17:29:56.265810 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-02 17:29:56.267178 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-02 17:29:56.272406 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-02 17:29:56.272435 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-02 17:29:56.272447 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-02 17:29:56.273249 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-02 17:29:56.274327 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-02 17:29:56.275057 | orchestrator | 2025-06-02 17:29:56.275927 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-06-02 17:29:56.276832 | orchestrator | Monday 02 June 2025 17:29:56 +0000 (0:00:02.305) 0:00:06.730 *********** 2025-06-02 17:29:56.839142 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:29:56.839322 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:29:56.839535 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:29:56.843310 | orchestrator | 2025-06-02 17:29:56.843349 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2025-06-02 17:29:56.843364 | orchestrator | Monday 02 June 2025 17:29:56 +0000 (0:00:00.578) 0:00:07.308 *********** 2025-06-02 17:29:57.446873 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:29:57.448052 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:29:57.448753 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:29:57.449780 | orchestrator | 2025-06-02 17:29:57.450396 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:29:57.451061 | orchestrator | 2025-06-02 17:29:57 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 17:29:57.451337 | orchestrator | 2025-06-02 17:29:57 | INFO  | Please wait and do not abort execution. 2025-06-02 17:29:57.452289 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:29:57.452633 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:29:57.453683 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:29:57.454431 | orchestrator | 2025-06-02 17:29:57.455163 | orchestrator | 2025-06-02 17:29:57.456084 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:29:57.457092 | orchestrator | Monday 02 June 2025 17:29:57 +0000 (0:00:00.607) 0:00:07.916 *********** 2025-06-02 17:29:57.457762 | orchestrator | =============================================================================== 2025-06-02 17:29:57.458903 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.31s 2025-06-02 17:29:57.458936 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.38s 2025-06-02 17:29:57.459979 | orchestrator | Check device availability 
----------------------------------------------- 1.21s 2025-06-02 17:29:57.460040 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.65s 2025-06-02 17:29:57.460788 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s 2025-06-02 17:29:57.461143 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s 2025-06-02 17:29:57.461666 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2025-06-02 17:29:57.461998 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s 2025-06-02 17:29:57.462471 | orchestrator | Remove all rook related logical devices --------------------------------- 0.22s 2025-06-02 17:29:59.456118 | orchestrator | Registering Redlock._acquired_script 2025-06-02 17:29:59.456223 | orchestrator | Registering Redlock._extend_script 2025-06-02 17:29:59.456239 | orchestrator | Registering Redlock._release_script 2025-06-02 17:29:59.508210 | orchestrator | 2025-06-02 17:29:59 | INFO  | Task b6e36ef1-e10e-4fa5-ba9a-102651695007 (facts) was prepared for execution. 2025-06-02 17:29:59.508407 | orchestrator | 2025-06-02 17:29:59 | INFO  | It takes a moment until task b6e36ef1-e10e-4fa5-ba9a-102651695007 (facts) has been started and output is visible here. 
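The wipe-partitions play above boils down to three per-device steps: clear filesystem/RAID signatures with `wipefs`, zero the first 32M, then refresh udev. A minimal sketch under stated assumptions — `wipe_device` is a name invented here, the `wipefs` call is guarded so the sketch also runs against a regular file, and the device list `/dev/sdb`..`/dev/sdd` is taken from the play's loop items:

```shell
#!/usr/bin/env bash
# Sketch of the per-device wipe performed by the play above.
wipe_device() {
    local dev=$1
    # Clear filesystem/RAID signatures; guarded since this sketch may run
    # where wipefs is absent or the target is an ordinary file.
    if command -v wipefs >/dev/null; then wipefs --all "$dev" || true; fi
    # Overwrite the first 32M with zeros, as the play does.
    dd if=/dev/zero of="$dev" bs=1M count=32 conv=notrunc status=none
}

# On the storage nodes the play iterates over the OSD disks and then
# refreshes udev (root required, so left as a comment here):
#   for dev in /dev/sdb /dev/sdc /dev/sdd; do wipe_device "$dev"; done
#   udevadm control --reload-rules   # "Reload udev rules"
#   udevadm trigger                  # "Request device events from the kernel"
```

The two rook/ceph "Remove all ... logical devices" tasks were skipped in this run, so the sketch covers only the branch that actually executed.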
2025-06-02 17:30:03.312462 | orchestrator |
2025-06-02 17:30:03.314472 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-02 17:30:03.314513 | orchestrator |
2025-06-02 17:30:03.315640 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 17:30:03.316548 | orchestrator | Monday 02 June 2025 17:30:03 +0000 (0:00:00.264) 0:00:00.264 ***********
2025-06-02 17:30:04.301498 | orchestrator | ok: [testbed-manager]
2025-06-02 17:30:04.306297 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:30:04.306344 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:30:04.307509 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:30:04.308937 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:30:04.311094 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:30:04.311235 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:30:04.312851 | orchestrator |
2025-06-02 17:30:04.313664 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 17:30:04.315745 | orchestrator | Monday 02 June 2025 17:30:04 +0000 (0:00:00.991) 0:00:01.256 ***********
2025-06-02 17:30:04.468912 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:30:04.542574 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:30:04.618403 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:30:04.696027 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:30:04.766513 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:30:05.416672 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:30:05.417943 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:30:05.419388 | orchestrator |
2025-06-02 17:30:05.420755 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 17:30:05.421807 | orchestrator |
2025-06-02 17:30:05.422909 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 17:30:05.423703 | orchestrator | Monday 02 June 2025 17:30:05 +0000 (0:00:01.118) 0:00:02.374 ***********
2025-06-02 17:30:10.168085 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:30:10.168173 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:30:10.168228 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:30:10.169120 | orchestrator | ok: [testbed-manager]
2025-06-02 17:30:10.169330 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:30:10.169803 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:30:10.170289 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:30:10.170539 | orchestrator |
2025-06-02 17:30:10.170886 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 17:30:10.171411 | orchestrator |
2025-06-02 17:30:10.171785 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 17:30:10.172178 | orchestrator | Monday 02 June 2025 17:30:10 +0000 (0:00:04.751) 0:00:07.126 ***********
2025-06-02 17:30:10.309397 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:30:10.382975 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:30:10.450676 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:30:10.527324 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:30:10.601785 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:30:10.634461 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:30:10.635850 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:30:10.638959 | orchestrator |
2025-06-02 17:30:10.638994 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:30:10.639195 | orchestrator | 2025-06-02 17:30:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:30:10.639542 | orchestrator | 2025-06-02 17:30:10 | INFO  | Please wait and do not abort execution.
2025-06-02 17:30:10.640169 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:30:10.641585 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:30:10.642406 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:30:10.643467 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:30:10.644929 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:30:10.645418 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:30:10.646461 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:30:10.646861 | orchestrator |
2025-06-02 17:30:10.647740 | orchestrator |
2025-06-02 17:30:10.648292 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:30:10.648786 | orchestrator | Monday 02 June 2025 17:30:10 +0000 (0:00:00.466) 0:00:07.592 ***********
2025-06-02 17:30:10.649432 | orchestrator | ===============================================================================
2025-06-02 17:30:10.650007 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.75s
2025-06-02 17:30:10.654515 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.12s
2025-06-02 17:30:10.654546 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.99s
2025-06-02 17:30:10.654557 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s
2025-06-02
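The `osism.commons.facts : Create custom facts directory` task above refers to Ansible's local-facts mechanism: by convention, JSON files named `*.fact` under `/etc/ansible/facts.d` are picked up during fact gathering and exposed to later plays as `ansible_local.<name>`. A small stand-alone sketch of that convention (the `testbed.fact` file name and its contents here are hypothetical, not taken from the role):

```python
import json
import tempfile
from pathlib import Path

# A temp dir stands in for Ansible's default custom facts directory,
# /etc/ansible/facts.d (the real path requires root to write).
facts_d = Path(tempfile.mkdtemp()) / "facts.d"
facts_d.mkdir()

# A "*.fact" file containing JSON becomes ansible_local.testbed on the host.
# File name and payload are illustrative assumptions.
fact_file = facts_d / "testbed.fact"
fact_file.write_text(json.dumps({"deployment": "in-a-nutshell"}))

# Roughly what fact gathering reads back for static .fact files:
ansible_local = {f.stem: json.loads(f.read_text()) for f in facts_d.glob("*.fact")}
print(ansible_local["testbed"]["deployment"])  # prints: in-a-nutshell
```

`.fact` files may also be executables that print JSON; static JSON files like the one above are the simplest case.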
17:30:13.846170 | orchestrator | 2025-06-02 17:30:13 | INFO  | Task 025d16fe-945e-445e-bac3-54f8ff1a48c6 (ceph-configure-lvm-volumes) was prepared for execution. 2025-06-02 17:30:13.846262 | orchestrator | 2025-06-02 17:30:13 | INFO  | It takes a moment until task 025d16fe-945e-445e-bac3-54f8ff1a48c6 (ceph-configure-lvm-volumes) has been started and output is visible here. 2025-06-02 17:30:19.366101 | orchestrator | 2025-06-02 17:30:19.366420 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-02 17:30:19.367455 | orchestrator | 2025-06-02 17:30:19.369513 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 17:30:19.369539 | orchestrator | Monday 02 June 2025 17:30:19 +0000 (0:00:00.521) 0:00:00.521 *********** 2025-06-02 17:30:19.646864 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 17:30:19.647706 | orchestrator | 2025-06-02 17:30:19.648626 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 17:30:19.648950 | orchestrator | Monday 02 June 2025 17:30:19 +0000 (0:00:00.281) 0:00:00.803 *********** 2025-06-02 17:30:19.917349 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:30:19.920040 | orchestrator | 2025-06-02 17:30:19.922947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:19.925417 | orchestrator | Monday 02 June 2025 17:30:19 +0000 (0:00:00.271) 0:00:01.074 *********** 2025-06-02 17:30:20.336206 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-02 17:30:20.339335 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-02 17:30:20.340111 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-02 17:30:20.345276 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-02 17:30:20.345656 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-02 17:30:20.347805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-02 17:30:20.348767 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-02 17:30:20.350505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-02 17:30:20.351620 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-02 17:30:20.352850 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-02 17:30:20.354822 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-02 17:30:20.355115 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-02 17:30:20.356022 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-02 17:30:20.356851 | orchestrator | 2025-06-02 17:30:20.357093 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:20.357149 | orchestrator | Monday 02 June 2025 17:30:20 +0000 (0:00:00.417) 0:00:01.492 *********** 2025-06-02 17:30:21.055417 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:21.057633 | orchestrator | 2025-06-02 17:30:21.061582 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:21.062897 | orchestrator | Monday 02 June 2025 17:30:21 +0000 (0:00:00.721) 0:00:02.213 *********** 2025-06-02 17:30:21.362162 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:21.363377 | orchestrator | 2025-06-02 17:30:21.364274 | orchestrator | 
TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:21.365311 | orchestrator | Monday 02 June 2025 17:30:21 +0000 (0:00:00.304) 0:00:02.518 *********** 2025-06-02 17:30:21.602775 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:21.603931 | orchestrator | 2025-06-02 17:30:21.605330 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:21.606558 | orchestrator | Monday 02 June 2025 17:30:21 +0000 (0:00:00.241) 0:00:02.759 *********** 2025-06-02 17:30:21.801650 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:21.801738 | orchestrator | 2025-06-02 17:30:21.801814 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:21.802197 | orchestrator | Monday 02 June 2025 17:30:21 +0000 (0:00:00.200) 0:00:02.960 *********** 2025-06-02 17:30:21.997398 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:21.998118 | orchestrator | 2025-06-02 17:30:21.999999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:22.000028 | orchestrator | Monday 02 June 2025 17:30:21 +0000 (0:00:00.195) 0:00:03.156 *********** 2025-06-02 17:30:22.216917 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:22.217887 | orchestrator | 2025-06-02 17:30:22.217932 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:22.217958 | orchestrator | Monday 02 June 2025 17:30:22 +0000 (0:00:00.216) 0:00:03.372 *********** 2025-06-02 17:30:22.439089 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:22.439192 | orchestrator | 2025-06-02 17:30:22.439946 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:22.440875 | orchestrator | Monday 02 June 2025 17:30:22 +0000 (0:00:00.219) 0:00:03.592 *********** 2025-06-02 
17:30:22.630762 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:22.630948 | orchestrator | 2025-06-02 17:30:22.631444 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:22.631670 | orchestrator | Monday 02 June 2025 17:30:22 +0000 (0:00:00.193) 0:00:03.786 *********** 2025-06-02 17:30:23.107705 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12) 2025-06-02 17:30:23.108174 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12) 2025-06-02 17:30:23.108203 | orchestrator | 2025-06-02 17:30:23.108995 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:23.109071 | orchestrator | Monday 02 June 2025 17:30:23 +0000 (0:00:00.479) 0:00:04.266 *********** 2025-06-02 17:30:23.546278 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f446ae25-d9a7-444f-b563-a9cba680652a) 2025-06-02 17:30:23.552389 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f446ae25-d9a7-444f-b563-a9cba680652a) 2025-06-02 17:30:23.552433 | orchestrator | 2025-06-02 17:30:23.552446 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:23.552459 | orchestrator | Monday 02 June 2025 17:30:23 +0000 (0:00:00.438) 0:00:04.704 *********** 2025-06-02 17:30:24.209748 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dd4bab9d-0787-4709-bf4e-89aace2da140) 2025-06-02 17:30:24.209892 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dd4bab9d-0787-4709-bf4e-89aace2da140) 2025-06-02 17:30:24.210102 | orchestrator | 2025-06-02 17:30:24.213491 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:24.213682 | orchestrator | Monday 02 June 2025 17:30:24 +0000 
(0:00:00.664) 0:00:05.368 *********** 2025-06-02 17:30:24.932854 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c7f9d288-1a32-443d-a362-6ba679ef0f8f) 2025-06-02 17:30:24.933029 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c7f9d288-1a32-443d-a362-6ba679ef0f8f) 2025-06-02 17:30:24.936476 | orchestrator | 2025-06-02 17:30:24.939103 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:24.940267 | orchestrator | Monday 02 June 2025 17:30:24 +0000 (0:00:00.722) 0:00:06.091 *********** 2025-06-02 17:30:25.794937 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 17:30:25.795083 | orchestrator | 2025-06-02 17:30:25.797397 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:25.797731 | orchestrator | Monday 02 June 2025 17:30:25 +0000 (0:00:00.860) 0:00:06.951 *********** 2025-06-02 17:30:26.260343 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-02 17:30:26.260453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-02 17:30:26.260467 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-02 17:30:26.263507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-02 17:30:26.263985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-02 17:30:26.265171 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-02 17:30:26.265775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-02 17:30:26.266204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2025-06-02 17:30:26.267333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-02 17:30:26.267635 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-02 17:30:26.268042 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-02 17:30:26.269166 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-02 17:30:26.269742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-02 17:30:26.271116 | orchestrator | 2025-06-02 17:30:26.271453 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:26.271930 | orchestrator | Monday 02 June 2025 17:30:26 +0000 (0:00:00.466) 0:00:07.417 *********** 2025-06-02 17:30:26.487566 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:26.487830 | orchestrator | 2025-06-02 17:30:26.488579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:26.490128 | orchestrator | Monday 02 June 2025 17:30:26 +0000 (0:00:00.228) 0:00:07.645 *********** 2025-06-02 17:30:26.697430 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:26.698477 | orchestrator | 2025-06-02 17:30:26.700318 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:26.700423 | orchestrator | Monday 02 June 2025 17:30:26 +0000 (0:00:00.210) 0:00:07.856 *********** 2025-06-02 17:30:26.919879 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:26.924304 | orchestrator | 2025-06-02 17:30:26.924356 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:26.924371 | orchestrator | Monday 02 June 2025 17:30:26 +0000 
(0:00:00.219) 0:00:08.076 *********** 2025-06-02 17:30:27.137335 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:27.137439 | orchestrator | 2025-06-02 17:30:27.138203 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:27.138784 | orchestrator | Monday 02 June 2025 17:30:27 +0000 (0:00:00.220) 0:00:08.296 *********** 2025-06-02 17:30:27.337069 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:27.338233 | orchestrator | 2025-06-02 17:30:27.338329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:27.338347 | orchestrator | Monday 02 June 2025 17:30:27 +0000 (0:00:00.195) 0:00:08.492 *********** 2025-06-02 17:30:27.525785 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:27.525969 | orchestrator | 2025-06-02 17:30:27.526369 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:27.526657 | orchestrator | Monday 02 June 2025 17:30:27 +0000 (0:00:00.189) 0:00:08.681 *********** 2025-06-02 17:30:27.741632 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:27.743286 | orchestrator | 2025-06-02 17:30:27.743869 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:27.744102 | orchestrator | Monday 02 June 2025 17:30:27 +0000 (0:00:00.218) 0:00:08.900 *********** 2025-06-02 17:30:27.946953 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:27.948444 | orchestrator | 2025-06-02 17:30:27.949069 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:27.949665 | orchestrator | Monday 02 June 2025 17:30:27 +0000 (0:00:00.200) 0:00:09.101 *********** 2025-06-02 17:30:29.064961 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-02 17:30:29.066430 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-02 
17:30:29.070095 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-02 17:30:29.070458 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-02 17:30:29.070996 | orchestrator | 2025-06-02 17:30:29.072442 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:29.073547 | orchestrator | Monday 02 June 2025 17:30:29 +0000 (0:00:01.117) 0:00:10.219 *********** 2025-06-02 17:30:29.283915 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:29.286285 | orchestrator | 2025-06-02 17:30:29.286325 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:29.286341 | orchestrator | Monday 02 June 2025 17:30:29 +0000 (0:00:00.220) 0:00:10.439 *********** 2025-06-02 17:30:29.485040 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:29.486627 | orchestrator | 2025-06-02 17:30:29.486733 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:29.488443 | orchestrator | Monday 02 June 2025 17:30:29 +0000 (0:00:00.204) 0:00:10.643 *********** 2025-06-02 17:30:29.712497 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:29.712607 | orchestrator | 2025-06-02 17:30:29.712891 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:29.714169 | orchestrator | Monday 02 June 2025 17:30:29 +0000 (0:00:00.226) 0:00:10.870 *********** 2025-06-02 17:30:30.084009 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:30.084172 | orchestrator | 2025-06-02 17:30:30.086993 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-02 17:30:30.087943 | orchestrator | Monday 02 June 2025 17:30:30 +0000 (0:00:00.372) 0:00:11.242 *********** 2025-06-02 17:30:30.283455 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-06-02 17:30:30.283569 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-06-02 17:30:30.285140 | orchestrator | 2025-06-02 17:30:30.285467 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-02 17:30:30.285837 | orchestrator | Monday 02 June 2025 17:30:30 +0000 (0:00:00.197) 0:00:11.439 *********** 2025-06-02 17:30:30.442699 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:30.443474 | orchestrator | 2025-06-02 17:30:30.444260 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-02 17:30:30.444284 | orchestrator | Monday 02 June 2025 17:30:30 +0000 (0:00:00.161) 0:00:11.601 *********** 2025-06-02 17:30:30.646743 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:30.647329 | orchestrator | 2025-06-02 17:30:30.647861 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-02 17:30:30.650356 | orchestrator | Monday 02 June 2025 17:30:30 +0000 (0:00:00.200) 0:00:11.802 *********** 2025-06-02 17:30:30.849821 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:30.849941 | orchestrator | 2025-06-02 17:30:30.850501 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-02 17:30:30.855148 | orchestrator | Monday 02 June 2025 17:30:30 +0000 (0:00:00.202) 0:00:12.004 *********** 2025-06-02 17:30:31.034347 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:30:31.034452 | orchestrator | 2025-06-02 17:30:31.035265 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-02 17:30:31.035607 | orchestrator | Monday 02 June 2025 17:30:31 +0000 (0:00:00.185) 0:00:12.190 *********** 2025-06-02 17:30:31.230526 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8450978f-95f9-56a8-b94f-b89f59985534'}}) 2025-06-02 17:30:31.231308 | orchestrator | ok: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4af7f5ab-70f7-5f81-8195-4d6574833a1e'}}) 2025-06-02 17:30:31.234320 | orchestrator | 2025-06-02 17:30:31.235808 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-02 17:30:31.238811 | orchestrator | Monday 02 June 2025 17:30:31 +0000 (0:00:00.197) 0:00:12.387 *********** 2025-06-02 17:30:31.391960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8450978f-95f9-56a8-b94f-b89f59985534'}})  2025-06-02 17:30:31.392371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4af7f5ab-70f7-5f81-8195-4d6574833a1e'}})  2025-06-02 17:30:31.392611 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:31.392918 | orchestrator | 2025-06-02 17:30:31.393195 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-02 17:30:31.393439 | orchestrator | Monday 02 June 2025 17:30:31 +0000 (0:00:00.161) 0:00:12.548 *********** 2025-06-02 17:30:31.769185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8450978f-95f9-56a8-b94f-b89f59985534'}})  2025-06-02 17:30:31.769474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4af7f5ab-70f7-5f81-8195-4d6574833a1e'}})  2025-06-02 17:30:31.770139 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:31.775101 | orchestrator | 2025-06-02 17:30:31.777530 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-02 17:30:31.777844 | orchestrator | Monday 02 June 2025 17:30:31 +0000 (0:00:00.378) 0:00:12.927 *********** 2025-06-02 17:30:31.945854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8450978f-95f9-56a8-b94f-b89f59985534'}})  2025-06-02 17:30:31.945958 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4af7f5ab-70f7-5f81-8195-4d6574833a1e'}})  2025-06-02 17:30:31.946142 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:31.949674 | orchestrator | 2025-06-02 17:30:31.951323 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-02 17:30:31.952055 | orchestrator | Monday 02 June 2025 17:30:31 +0000 (0:00:00.175) 0:00:13.102 *********** 2025-06-02 17:30:32.102632 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:30:32.102820 | orchestrator | 2025-06-02 17:30:32.103260 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-02 17:30:32.103635 | orchestrator | Monday 02 June 2025 17:30:32 +0000 (0:00:00.156) 0:00:13.259 *********** 2025-06-02 17:30:32.246294 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:30:32.247169 | orchestrator | 2025-06-02 17:30:32.250315 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-02 17:30:32.253362 | orchestrator | Monday 02 June 2025 17:30:32 +0000 (0:00:00.144) 0:00:13.403 *********** 2025-06-02 17:30:32.388949 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:32.389884 | orchestrator | 2025-06-02 17:30:32.389920 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-02 17:30:32.391328 | orchestrator | Monday 02 June 2025 17:30:32 +0000 (0:00:00.140) 0:00:13.544 *********** 2025-06-02 17:30:32.525174 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:32.526123 | orchestrator | 2025-06-02 17:30:32.531802 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-02 17:30:32.532376 | orchestrator | Monday 02 June 2025 17:30:32 +0000 (0:00:00.138) 0:00:13.682 *********** 2025-06-02 17:30:32.672244 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:30:32.672479 | orchestrator | 2025-06-02 
17:30:32.673348 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-02 17:30:32.673881 | orchestrator | Monday 02 June 2025 17:30:32 +0000 (0:00:00.148) 0:00:13.830 ***********
2025-06-02 17:30:32.807147 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 17:30:32.807368 | orchestrator |  "ceph_osd_devices": {
2025-06-02 17:30:32.809489 | orchestrator |  "sdb": {
2025-06-02 17:30:32.811866 | orchestrator |  "osd_lvm_uuid": "8450978f-95f9-56a8-b94f-b89f59985534"
2025-06-02 17:30:32.812039 | orchestrator |  },
2025-06-02 17:30:32.812361 | orchestrator |  "sdc": {
2025-06-02 17:30:32.815778 | orchestrator |  "osd_lvm_uuid": "4af7f5ab-70f7-5f81-8195-4d6574833a1e"
2025-06-02 17:30:32.816041 | orchestrator |  }
2025-06-02 17:30:32.816537 | orchestrator |  }
2025-06-02 17:30:32.817190 | orchestrator | }
2025-06-02 17:30:32.817734 | orchestrator |
2025-06-02 17:30:32.818299 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-02 17:30:32.818757 | orchestrator | Monday 02 June 2025 17:30:32 +0000 (0:00:00.135) 0:00:13.966 ***********
2025-06-02 17:30:32.953993 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:30:32.954277 | orchestrator |
2025-06-02 17:30:32.957374 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-02 17:30:32.957935 | orchestrator | Monday 02 June 2025 17:30:32 +0000 (0:00:00.143) 0:00:14.110 ***********
2025-06-02 17:30:33.089297 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:30:33.089496 | orchestrator |
2025-06-02 17:30:33.090734 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-02 17:30:33.091692 | orchestrator | Monday 02 June 2025 17:30:33 +0000 (0:00:00.138) 0:00:14.248 ***********
2025-06-02 17:30:33.218874 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:30:33.219110 | orchestrator |
2025-06-02 17:30:33.219615 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-02 17:30:33.220111 | orchestrator | Monday 02 June 2025 17:30:33 +0000 (0:00:00.127) 0:00:14.376 ***********
2025-06-02 17:30:33.413999 | orchestrator | changed: [testbed-node-3] => {
2025-06-02 17:30:33.414761 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-02 17:30:33.415434 | orchestrator |  "ceph_osd_devices": {
2025-06-02 17:30:33.417233 | orchestrator |  "sdb": {
2025-06-02 17:30:33.417418 | orchestrator |  "osd_lvm_uuid": "8450978f-95f9-56a8-b94f-b89f59985534"
2025-06-02 17:30:33.417954 | orchestrator |  },
2025-06-02 17:30:33.419100 | orchestrator |  "sdc": {
2025-06-02 17:30:33.421238 | orchestrator |  "osd_lvm_uuid": "4af7f5ab-70f7-5f81-8195-4d6574833a1e"
2025-06-02 17:30:33.422944 | orchestrator |  }
2025-06-02 17:30:33.423772 | orchestrator |  },
2025-06-02 17:30:33.424462 | orchestrator |  "lvm_volumes": [
2025-06-02 17:30:33.425626 | orchestrator |  {
2025-06-02 17:30:33.427961 | orchestrator |  "data": "osd-block-8450978f-95f9-56a8-b94f-b89f59985534",
2025-06-02 17:30:33.429783 | orchestrator |  "data_vg": "ceph-8450978f-95f9-56a8-b94f-b89f59985534"
2025-06-02 17:30:33.429875 | orchestrator |  },
2025-06-02 17:30:33.430922 | orchestrator |  {
2025-06-02 17:30:33.432166 | orchestrator |  "data": "osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e",
2025-06-02 17:30:33.433268 | orchestrator |  "data_vg": "ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e"
2025-06-02 17:30:33.434333 | orchestrator |  }
2025-06-02 17:30:33.436016 | orchestrator |  ]
2025-06-02 17:30:33.436424 | orchestrator |  }
2025-06-02 17:30:33.436990 | orchestrator | }
2025-06-02 17:30:33.438361 | orchestrator |
2025-06-02 17:30:33.439699 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-02 17:30:33.440503 | orchestrator | Monday 02 June 2025 17:30:33 +0000 (0:00:00.195) 0:00:14.571 ***********
2025-06-02
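The `_ceph_configure_lvm_config_data` printed above pairs each OSD device with a stable (version-5) UUID and derives one `lvm_volumes` entry per device: an LV `osd-block-<uuid>` inside a VG `ceph-<uuid>`. That mapping can be reconstructed directly from the logged `ceph_osd_devices`; the comprehension below is an illustrative reconstruction of the naming scheme read off the log, not the playbook's actual template:

```python
# ceph_osd_devices exactly as printed by the "Print configuration data" task.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "8450978f-95f9-56a8-b94f-b89f59985534"},
    "sdc": {"osd_lvm_uuid": "4af7f5ab-70f7-5f81-8195-4d6574833a1e"},
}

# Block-only layout (no separate DB/WAL devices, matching the skipped tasks):
# each OSD gets data LV "osd-block-<uuid>" in data VG "ceph-<uuid>".
lvm_volumes = [
    {
        "data": f"osd-block-{cfg['osd_lvm_uuid']}",
        "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
    }
    for cfg in ceph_osd_devices.values()
]
```

Because the UUIDs are stable per device, re-running the configuration task reproduces the same VG/LV names, which keeps the written configuration file idempotent.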
17:30:35.942164 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 17:30:35.945515 | orchestrator | 2025-06-02 17:30:35.946120 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-02 17:30:35.947234 | orchestrator | 2025-06-02 17:30:35.948315 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 17:30:35.949399 | orchestrator | Monday 02 June 2025 17:30:35 +0000 (0:00:02.526) 0:00:17.098 *********** 2025-06-02 17:30:36.210697 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-02 17:30:36.210918 | orchestrator | 2025-06-02 17:30:36.216091 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 17:30:36.218553 | orchestrator | Monday 02 June 2025 17:30:36 +0000 (0:00:00.269) 0:00:17.368 *********** 2025-06-02 17:30:36.446503 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:30:36.447186 | orchestrator | 2025-06-02 17:30:36.449032 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:36.450926 | orchestrator | Monday 02 June 2025 17:30:36 +0000 (0:00:00.236) 0:00:17.604 *********** 2025-06-02 17:30:36.811838 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-02 17:30:36.813037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-02 17:30:36.817064 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-02 17:30:36.817097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-02 17:30:36.817349 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-02 17:30:36.817869 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-02 17:30:36.818403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-02 17:30:36.819274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-02 17:30:36.819508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-02 17:30:36.819861 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-02 17:30:36.820361 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-02 17:30:36.820849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-02 17:30:36.821298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-02 17:30:36.822098 | orchestrator | 2025-06-02 17:30:36.822639 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:36.823406 | orchestrator | Monday 02 June 2025 17:30:36 +0000 (0:00:00.365) 0:00:17.969 *********** 2025-06-02 17:30:37.015286 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:37.016112 | orchestrator | 2025-06-02 17:30:37.017627 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:37.022546 | orchestrator | Monday 02 June 2025 17:30:37 +0000 (0:00:00.203) 0:00:18.173 *********** 2025-06-02 17:30:37.217245 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:37.217828 | orchestrator | 2025-06-02 17:30:37.219884 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:37.223676 | orchestrator | Monday 02 June 2025 17:30:37 +0000 (0:00:00.202) 0:00:18.375 *********** 2025-06-02 17:30:37.416879 | orchestrator | skipping: 
[testbed-node-4] 2025-06-02 17:30:37.417881 | orchestrator | 2025-06-02 17:30:37.421490 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:37.422369 | orchestrator | Monday 02 June 2025 17:30:37 +0000 (0:00:00.199) 0:00:18.574 *********** 2025-06-02 17:30:37.605746 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:37.605965 | orchestrator | 2025-06-02 17:30:37.608191 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:37.609947 | orchestrator | Monday 02 June 2025 17:30:37 +0000 (0:00:00.187) 0:00:18.761 *********** 2025-06-02 17:30:38.260003 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:38.260167 | orchestrator | 2025-06-02 17:30:38.260188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:38.260268 | orchestrator | Monday 02 June 2025 17:30:38 +0000 (0:00:00.656) 0:00:19.418 *********** 2025-06-02 17:30:38.483338 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:38.483492 | orchestrator | 2025-06-02 17:30:38.483572 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:38.485467 | orchestrator | Monday 02 June 2025 17:30:38 +0000 (0:00:00.221) 0:00:19.640 *********** 2025-06-02 17:30:38.700373 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:38.701398 | orchestrator | 2025-06-02 17:30:38.702561 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:38.707035 | orchestrator | Monday 02 June 2025 17:30:38 +0000 (0:00:00.218) 0:00:19.858 *********** 2025-06-02 17:30:38.974463 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:38.974765 | orchestrator | 2025-06-02 17:30:38.977205 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:38.977217 | 
orchestrator | Monday 02 June 2025 17:30:38 +0000 (0:00:00.272) 0:00:20.130 *********** 2025-06-02 17:30:39.419290 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84) 2025-06-02 17:30:39.419800 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84) 2025-06-02 17:30:39.420824 | orchestrator | 2025-06-02 17:30:39.421469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:39.421876 | orchestrator | Monday 02 June 2025 17:30:39 +0000 (0:00:00.447) 0:00:20.578 *********** 2025-06-02 17:30:39.861372 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3) 2025-06-02 17:30:39.862535 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3) 2025-06-02 17:30:39.864184 | orchestrator | 2025-06-02 17:30:39.866116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:39.870065 | orchestrator | Monday 02 June 2025 17:30:39 +0000 (0:00:00.437) 0:00:21.016 *********** 2025-06-02 17:30:40.314717 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cab884bf-6138-4574-8f5c-e044606bea62) 2025-06-02 17:30:40.314839 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cab884bf-6138-4574-8f5c-e044606bea62) 2025-06-02 17:30:40.318326 | orchestrator | 2025-06-02 17:30:40.319302 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:40.319840 | orchestrator | Monday 02 June 2025 17:30:40 +0000 (0:00:00.454) 0:00:21.470 *********** 2025-06-02 17:30:40.757160 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_075a40bb-072b-46c1-930e-3c0277237be4) 2025-06-02 17:30:40.760204 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_075a40bb-072b-46c1-930e-3c0277237be4) 2025-06-02 17:30:40.760238 | orchestrator | 2025-06-02 17:30:40.762909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:40.764425 | orchestrator | Monday 02 June 2025 17:30:40 +0000 (0:00:00.442) 0:00:21.912 *********** 2025-06-02 17:30:41.070250 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 17:30:41.070407 | orchestrator | 2025-06-02 17:30:41.070498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:41.070515 | orchestrator | Monday 02 June 2025 17:30:41 +0000 (0:00:00.314) 0:00:22.227 *********** 2025-06-02 17:30:41.454721 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-02 17:30:41.456254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-02 17:30:41.460217 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-02 17:30:41.460273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-02 17:30:41.460312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-02 17:30:41.461697 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-02 17:30:41.463104 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-02 17:30:41.464023 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-02 17:30:41.465304 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-02 17:30:41.466369 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-02 17:30:41.467004 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-02 17:30:41.468187 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-02 17:30:41.469443 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-02 17:30:41.470279 | orchestrator | 2025-06-02 17:30:41.471369 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:41.472384 | orchestrator | Monday 02 June 2025 17:30:41 +0000 (0:00:00.384) 0:00:22.611 *********** 2025-06-02 17:30:41.671171 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:41.674832 | orchestrator | 2025-06-02 17:30:41.674899 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:41.674911 | orchestrator | Monday 02 June 2025 17:30:41 +0000 (0:00:00.214) 0:00:22.826 *********** 2025-06-02 17:30:42.383110 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:42.384786 | orchestrator | 2025-06-02 17:30:42.387122 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:42.388942 | orchestrator | Monday 02 June 2025 17:30:42 +0000 (0:00:00.714) 0:00:23.540 *********** 2025-06-02 17:30:42.634941 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:42.635093 | orchestrator | 2025-06-02 17:30:42.635182 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:42.635541 | orchestrator | Monday 02 June 2025 17:30:42 +0000 (0:00:00.250) 0:00:23.791 *********** 2025-06-02 17:30:42.873473 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:42.873739 | orchestrator | 2025-06-02 17:30:42.874387 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-06-02 17:30:42.875152 | orchestrator | Monday 02 June 2025 17:30:42 +0000 (0:00:00.236) 0:00:24.027 *********** 2025-06-02 17:30:43.178655 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:43.178736 | orchestrator | 2025-06-02 17:30:43.179083 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:43.180381 | orchestrator | Monday 02 June 2025 17:30:43 +0000 (0:00:00.309) 0:00:24.337 *********** 2025-06-02 17:30:43.400923 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:43.401752 | orchestrator | 2025-06-02 17:30:43.407325 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:43.407350 | orchestrator | Monday 02 June 2025 17:30:43 +0000 (0:00:00.221) 0:00:24.558 *********** 2025-06-02 17:30:43.609838 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:43.609942 | orchestrator | 2025-06-02 17:30:43.610011 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:43.610983 | orchestrator | Monday 02 June 2025 17:30:43 +0000 (0:00:00.204) 0:00:24.762 *********** 2025-06-02 17:30:43.802209 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:43.802645 | orchestrator | 2025-06-02 17:30:43.803174 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:43.803832 | orchestrator | Monday 02 June 2025 17:30:43 +0000 (0:00:00.195) 0:00:24.958 *********** 2025-06-02 17:30:44.447729 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-02 17:30:44.447840 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-02 17:30:44.448427 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-02 17:30:44.448975 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-02 17:30:44.449455 | orchestrator | 2025-06-02 
17:30:44.449792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:44.450163 | orchestrator | Monday 02 June 2025 17:30:44 +0000 (0:00:00.647) 0:00:25.606 *********** 2025-06-02 17:30:44.675558 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:44.675765 | orchestrator | 2025-06-02 17:30:44.676522 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:44.677321 | orchestrator | Monday 02 June 2025 17:30:44 +0000 (0:00:00.227) 0:00:25.833 *********** 2025-06-02 17:30:44.933892 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:44.934830 | orchestrator | 2025-06-02 17:30:44.935370 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:44.935852 | orchestrator | Monday 02 June 2025 17:30:44 +0000 (0:00:00.256) 0:00:26.090 *********** 2025-06-02 17:30:45.135542 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:45.135765 | orchestrator | 2025-06-02 17:30:45.136281 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:45.136873 | orchestrator | Monday 02 June 2025 17:30:45 +0000 (0:00:00.203) 0:00:26.293 *********** 2025-06-02 17:30:45.398086 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:45.398293 | orchestrator | 2025-06-02 17:30:45.400080 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-02 17:30:45.400383 | orchestrator | Monday 02 June 2025 17:30:45 +0000 (0:00:00.262) 0:00:26.555 *********** 2025-06-02 17:30:45.828087 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-02 17:30:45.828753 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-02 17:30:45.829073 | orchestrator | 2025-06-02 17:30:45.829565 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2025-06-02 17:30:45.830129 | orchestrator | Monday 02 June 2025 17:30:45 +0000 (0:00:00.430) 0:00:26.986 *********** 2025-06-02 17:30:45.969467 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:45.970289 | orchestrator | 2025-06-02 17:30:45.971226 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-02 17:30:45.973792 | orchestrator | Monday 02 June 2025 17:30:45 +0000 (0:00:00.141) 0:00:27.128 *********** 2025-06-02 17:30:46.113734 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:46.116977 | orchestrator | 2025-06-02 17:30:46.118186 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-02 17:30:46.118207 | orchestrator | Monday 02 June 2025 17:30:46 +0000 (0:00:00.143) 0:00:27.271 *********** 2025-06-02 17:30:46.251340 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:46.252338 | orchestrator | 2025-06-02 17:30:46.253168 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-02 17:30:46.254745 | orchestrator | Monday 02 June 2025 17:30:46 +0000 (0:00:00.136) 0:00:27.407 *********** 2025-06-02 17:30:46.389086 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:30:46.390480 | orchestrator | 2025-06-02 17:30:46.391464 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-02 17:30:46.392240 | orchestrator | Monday 02 June 2025 17:30:46 +0000 (0:00:00.139) 0:00:27.547 *********** 2025-06-02 17:30:46.610299 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '428bf6aa-16e8-529e-a7f6-02fc5b7007d7'}}) 2025-06-02 17:30:46.610997 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '26d332e8-3a94-5f56-adf2-82846ed63b84'}}) 2025-06-02 17:30:46.611647 | orchestrator | 2025-06-02 17:30:46.612123 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2025-06-02 17:30:46.612592 | orchestrator | Monday 02 June 2025 17:30:46 +0000 (0:00:00.221) 0:00:27.769 *********** 2025-06-02 17:30:46.773122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '428bf6aa-16e8-529e-a7f6-02fc5b7007d7'}})  2025-06-02 17:30:46.775543 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '26d332e8-3a94-5f56-adf2-82846ed63b84'}})  2025-06-02 17:30:46.777282 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:46.778973 | orchestrator | 2025-06-02 17:30:46.779738 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-02 17:30:46.780405 | orchestrator | Monday 02 June 2025 17:30:46 +0000 (0:00:00.162) 0:00:27.931 *********** 2025-06-02 17:30:46.932635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '428bf6aa-16e8-529e-a7f6-02fc5b7007d7'}})  2025-06-02 17:30:46.933225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '26d332e8-3a94-5f56-adf2-82846ed63b84'}})  2025-06-02 17:30:46.934467 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:46.935764 | orchestrator | 2025-06-02 17:30:46.936373 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-02 17:30:46.936924 | orchestrator | Monday 02 June 2025 17:30:46 +0000 (0:00:00.159) 0:00:28.090 *********** 2025-06-02 17:30:47.088005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '428bf6aa-16e8-529e-a7f6-02fc5b7007d7'}})  2025-06-02 17:30:47.089416 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '26d332e8-3a94-5f56-adf2-82846ed63b84'}})  2025-06-02 17:30:47.091827 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:47.092639 | 
orchestrator | 2025-06-02 17:30:47.093979 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-02 17:30:47.094733 | orchestrator | Monday 02 June 2025 17:30:47 +0000 (0:00:00.154) 0:00:28.245 *********** 2025-06-02 17:30:47.222160 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:30:47.222984 | orchestrator | 2025-06-02 17:30:47.224949 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-02 17:30:47.226311 | orchestrator | Monday 02 June 2025 17:30:47 +0000 (0:00:00.134) 0:00:28.380 *********** 2025-06-02 17:30:47.374524 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:30:47.375404 | orchestrator | 2025-06-02 17:30:47.376193 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-02 17:30:47.378414 | orchestrator | Monday 02 June 2025 17:30:47 +0000 (0:00:00.152) 0:00:28.532 *********** 2025-06-02 17:30:47.521339 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:47.523134 | orchestrator | 2025-06-02 17:30:47.524287 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-02 17:30:47.525439 | orchestrator | Monday 02 June 2025 17:30:47 +0000 (0:00:00.146) 0:00:28.679 *********** 2025-06-02 17:30:47.869411 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:47.870103 | orchestrator | 2025-06-02 17:30:47.871700 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-02 17:30:47.872448 | orchestrator | Monday 02 June 2025 17:30:47 +0000 (0:00:00.347) 0:00:29.026 *********** 2025-06-02 17:30:48.056975 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:30:48.057675 | orchestrator | 2025-06-02 17:30:48.061152 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-02 17:30:48.061208 | orchestrator | Monday 02 June 2025 17:30:48 +0000 
(0:00:00.187) 0:00:29.214 ***********
2025-06-02 17:30:48.201346 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 17:30:48.203348 | orchestrator |  "ceph_osd_devices": {
2025-06-02 17:30:48.203845 | orchestrator |  "sdb": {
2025-06-02 17:30:48.204451 | orchestrator |  "osd_lvm_uuid": "428bf6aa-16e8-529e-a7f6-02fc5b7007d7"
2025-06-02 17:30:48.204865 | orchestrator |  },
2025-06-02 17:30:48.204942 | orchestrator |  "sdc": {
2025-06-02 17:30:48.206722 | orchestrator |  "osd_lvm_uuid": "26d332e8-3a94-5f56-adf2-82846ed63b84"
2025-06-02 17:30:48.206787 | orchestrator |  }
2025-06-02 17:30:48.206806 | orchestrator |  }
2025-06-02 17:30:48.206917 | orchestrator | }
2025-06-02 17:30:48.207282 | orchestrator |
2025-06-02 17:30:48.207638 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-02 17:30:48.208023 | orchestrator | Monday 02 June 2025 17:30:48 +0000 (0:00:00.144) 0:00:29.358 ***********
2025-06-02 17:30:48.347212 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:30:48.347829 | orchestrator |
2025-06-02 17:30:48.349015 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-02 17:30:48.350128 | orchestrator | Monday 02 June 2025 17:30:48 +0000 (0:00:00.147) 0:00:29.505 ***********
2025-06-02 17:30:48.481904 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:30:48.483694 | orchestrator |
2025-06-02 17:30:48.484852 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-02 17:30:48.485642 | orchestrator | Monday 02 June 2025 17:30:48 +0000 (0:00:00.133) 0:00:29.639 ***********
2025-06-02 17:30:48.621689 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:30:48.622254 | orchestrator |
2025-06-02 17:30:48.623527 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-02 17:30:48.623553 | orchestrator | Monday 02 June 2025 17:30:48 +0000 (0:00:00.139) 0:00:29.778 ***********
2025-06-02 17:30:48.888341 | orchestrator | changed: [testbed-node-4] => {
2025-06-02 17:30:48.889604 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-06-02 17:30:48.892372 | orchestrator |  "ceph_osd_devices": {
2025-06-02 17:30:48.892871 | orchestrator |  "sdb": {
2025-06-02 17:30:48.893238 | orchestrator |  "osd_lvm_uuid": "428bf6aa-16e8-529e-a7f6-02fc5b7007d7"
2025-06-02 17:30:48.894899 | orchestrator |  },
2025-06-02 17:30:48.896245 | orchestrator |  "sdc": {
2025-06-02 17:30:48.898152 | orchestrator |  "osd_lvm_uuid": "26d332e8-3a94-5f56-adf2-82846ed63b84"
2025-06-02 17:30:48.899113 | orchestrator |  }
2025-06-02 17:30:48.899855 | orchestrator |  },
2025-06-02 17:30:48.901083 | orchestrator |  "lvm_volumes": [
2025-06-02 17:30:48.901944 | orchestrator |  {
2025-06-02 17:30:48.902412 | orchestrator |  "data": "osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7",
2025-06-02 17:30:48.904389 | orchestrator |  "data_vg": "ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7"
2025-06-02 17:30:48.904412 | orchestrator |  },
2025-06-02 17:30:48.905664 | orchestrator |  {
2025-06-02 17:30:48.906516 | orchestrator |  "data": "osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84",
2025-06-02 17:30:48.907405 | orchestrator |  "data_vg": "ceph-26d332e8-3a94-5f56-adf2-82846ed63b84"
2025-06-02 17:30:48.908079 | orchestrator |  }
2025-06-02 17:30:48.908963 | orchestrator |  ]
2025-06-02 17:30:48.910000 | orchestrator |  }
2025-06-02 17:30:48.910800 | orchestrator | }
2025-06-02 17:30:48.912169 | orchestrator |
2025-06-02 17:30:48.914089 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-02 17:30:48.914170 | orchestrator | Monday 02 June 2025 17:30:48 +0000 (0:00:00.265) 0:00:30.044 ***********
2025-06-02 17:30:50.075493 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-02 17:30:50.076651 | orchestrator |
2025-06-02 17:30:50.078705 | orchestrator | PLAY [Ceph
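The `osd_lvm_uuid` values printed for each node are all version 5 (name-based) UUIDs, which is consistent with the "Set UUIDs for OSD VGs/LVs" task filling in the initially `None` values deterministically per node and device. A hypothetical sketch of such a derivation; the namespace and seed string below are assumptions for illustration and are not visible in this log:

```python
import uuid

def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Derive a stable, name-based (version 5) UUID for an OSD device.

    Assumed inputs: hostname + device name hashed under NAMESPACE_DNS.
    The playbook's actual namespace and seed may differ.
    """
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))

# The same node/device pair always yields the same UUID, so re-running
# the play regenerates identical VG/LV names.
u1 = osd_lvm_uuid("testbed-node-4", "sdb")
u2 = osd_lvm_uuid("testbed-node-4", "sdb")
```

The point of a deterministic scheme like this is idempotency: repeated runs produce the same `lvm_volumes` configuration rather than allocating fresh names each time.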
configure LVM] ****************************************************** 2025-06-02 17:30:50.080201 | orchestrator | 2025-06-02 17:30:50.081564 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 17:30:50.082474 | orchestrator | Monday 02 June 2025 17:30:50 +0000 (0:00:01.186) 0:00:31.231 *********** 2025-06-02 17:30:50.582369 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-02 17:30:50.583166 | orchestrator | 2025-06-02 17:30:50.584155 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 17:30:50.585193 | orchestrator | Monday 02 June 2025 17:30:50 +0000 (0:00:00.508) 0:00:31.739 *********** 2025-06-02 17:30:51.288848 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:30:51.289184 | orchestrator | 2025-06-02 17:30:51.290544 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:51.292142 | orchestrator | Monday 02 June 2025 17:30:51 +0000 (0:00:00.704) 0:00:32.444 *********** 2025-06-02 17:30:51.686087 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-02 17:30:51.687040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-02 17:30:51.688102 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-02 17:30:51.688811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-02 17:30:51.689853 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-02 17:30:51.690667 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-02 17:30:51.691030 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-02 17:30:51.691811 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-02 17:30:51.692149 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-02 17:30:51.692399 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-02 17:30:51.692863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-02 17:30:51.693218 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-02 17:30:51.693714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-02 17:30:51.694218 | orchestrator | 2025-06-02 17:30:51.694470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:51.694879 | orchestrator | Monday 02 June 2025 17:30:51 +0000 (0:00:00.398) 0:00:32.842 *********** 2025-06-02 17:30:51.886976 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:51.890275 | orchestrator | 2025-06-02 17:30:51.891276 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:51.892482 | orchestrator | Monday 02 June 2025 17:30:51 +0000 (0:00:00.201) 0:00:33.044 *********** 2025-06-02 17:30:52.093043 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:52.094470 | orchestrator | 2025-06-02 17:30:52.095461 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:52.096154 | orchestrator | Monday 02 June 2025 17:30:52 +0000 (0:00:00.206) 0:00:33.250 *********** 2025-06-02 17:30:52.304253 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:52.305475 | orchestrator | 2025-06-02 17:30:52.306117 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:52.307179 | 
orchestrator | Monday 02 June 2025 17:30:52 +0000 (0:00:00.212) 0:00:33.462 *********** 2025-06-02 17:30:52.519951 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:52.521091 | orchestrator | 2025-06-02 17:30:52.522324 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:52.523335 | orchestrator | Monday 02 June 2025 17:30:52 +0000 (0:00:00.213) 0:00:33.676 *********** 2025-06-02 17:30:52.728394 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:52.729009 | orchestrator | 2025-06-02 17:30:52.730171 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:52.731167 | orchestrator | Monday 02 June 2025 17:30:52 +0000 (0:00:00.208) 0:00:33.885 *********** 2025-06-02 17:30:52.956637 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:52.959440 | orchestrator | 2025-06-02 17:30:52.959558 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:52.959670 | orchestrator | Monday 02 June 2025 17:30:52 +0000 (0:00:00.227) 0:00:34.113 *********** 2025-06-02 17:30:53.160333 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:53.160441 | orchestrator | 2025-06-02 17:30:53.161794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:53.162816 | orchestrator | Monday 02 June 2025 17:30:53 +0000 (0:00:00.205) 0:00:34.318 *********** 2025-06-02 17:30:53.360225 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:53.361225 | orchestrator | 2025-06-02 17:30:53.362897 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:53.365336 | orchestrator | Monday 02 June 2025 17:30:53 +0000 (0:00:00.199) 0:00:34.518 *********** 2025-06-02 17:30:54.026847 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6) 2025-06-02 17:30:54.029298 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6) 2025-06-02 17:30:54.030439 | orchestrator | 2025-06-02 17:30:54.030831 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:54.034826 | orchestrator | Monday 02 June 2025 17:30:54 +0000 (0:00:00.666) 0:00:35.184 *********** 2025-06-02 17:30:54.905048 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4a588e14-c726-4684-ac8a-ec1bcbcaf53d) 2025-06-02 17:30:54.905273 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4a588e14-c726-4684-ac8a-ec1bcbcaf53d) 2025-06-02 17:30:54.906396 | orchestrator | 2025-06-02 17:30:54.909110 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:54.910485 | orchestrator | Monday 02 June 2025 17:30:54 +0000 (0:00:00.877) 0:00:36.062 *********** 2025-06-02 17:30:55.345694 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_42dd6fc7-77c1-48dd-afcf-d774f79f6bbd) 2025-06-02 17:30:55.347061 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_42dd6fc7-77c1-48dd-afcf-d774f79f6bbd) 2025-06-02 17:30:55.347642 | orchestrator | 2025-06-02 17:30:55.349263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:30:55.349287 | orchestrator | Monday 02 June 2025 17:30:55 +0000 (0:00:00.441) 0:00:36.503 *********** 2025-06-02 17:30:56.099556 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_53941cc3-a8ff-45b3-9c82-286f81867ab6) 2025-06-02 17:30:56.100232 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_53941cc3-a8ff-45b3-9c82-286f81867ab6) 2025-06-02 17:30:56.100773 | orchestrator | 2025-06-02 17:30:56.101365 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-06-02 17:30:56.103012 | orchestrator | Monday 02 June 2025 17:30:56 +0000 (0:00:00.753) 0:00:37.257 *********** 2025-06-02 17:30:56.449689 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 17:30:56.449963 | orchestrator | 2025-06-02 17:30:56.450126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:56.450893 | orchestrator | Monday 02 June 2025 17:30:56 +0000 (0:00:00.351) 0:00:37.608 *********** 2025-06-02 17:30:56.888341 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-02 17:30:56.888959 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-02 17:30:56.889649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-02 17:30:56.890768 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-02 17:30:56.891469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-02 17:30:56.892114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-02 17:30:56.895076 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-02 17:30:56.895933 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-02 17:30:56.896340 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-02 17:30:56.897333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-02 17:30:56.897868 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2025-06-02 17:30:56.898690 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-02 17:30:56.899138 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-02 17:30:56.899676 | orchestrator | 2025-06-02 17:30:56.900494 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:56.901108 | orchestrator | Monday 02 June 2025 17:30:56 +0000 (0:00:00.437) 0:00:38.045 *********** 2025-06-02 17:30:57.132764 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:57.133270 | orchestrator | 2025-06-02 17:30:57.134451 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:57.134988 | orchestrator | Monday 02 June 2025 17:30:57 +0000 (0:00:00.243) 0:00:38.289 *********** 2025-06-02 17:30:57.442983 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:57.443086 | orchestrator | 2025-06-02 17:30:57.445066 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:57.446656 | orchestrator | Monday 02 June 2025 17:30:57 +0000 (0:00:00.307) 0:00:38.597 *********** 2025-06-02 17:30:57.656912 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:57.658003 | orchestrator | 2025-06-02 17:30:57.658876 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:57.660055 | orchestrator | Monday 02 June 2025 17:30:57 +0000 (0:00:00.216) 0:00:38.814 *********** 2025-06-02 17:30:57.859849 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:57.860390 | orchestrator | 2025-06-02 17:30:57.861219 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:57.864538 | orchestrator | Monday 02 June 2025 17:30:57 +0000 (0:00:00.203) 0:00:39.017 *********** 2025-06-02 17:30:58.115948 
| orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:58.116185 | orchestrator | 2025-06-02 17:30:58.118536 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:58.119312 | orchestrator | Monday 02 June 2025 17:30:58 +0000 (0:00:00.254) 0:00:39.272 *********** 2025-06-02 17:30:59.030741 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:59.033244 | orchestrator | 2025-06-02 17:30:59.036291 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:59.036327 | orchestrator | Monday 02 June 2025 17:30:59 +0000 (0:00:00.916) 0:00:40.189 *********** 2025-06-02 17:30:59.267883 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:59.268757 | orchestrator | 2025-06-02 17:30:59.270653 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:59.270988 | orchestrator | Monday 02 June 2025 17:30:59 +0000 (0:00:00.236) 0:00:40.425 *********** 2025-06-02 17:30:59.499254 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:30:59.499418 | orchestrator | 2025-06-02 17:30:59.500951 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:30:59.501805 | orchestrator | Monday 02 June 2025 17:30:59 +0000 (0:00:00.231) 0:00:40.657 *********** 2025-06-02 17:31:00.242825 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-02 17:31:00.244854 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-02 17:31:00.245905 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-02 17:31:00.246781 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-02 17:31:00.247785 | orchestrator | 2025-06-02 17:31:00.248533 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:31:00.249849 | orchestrator | Monday 02 June 2025 17:31:00 +0000 (0:00:00.738) 0:00:41.396 
***********
2025-06-02 17:31:00.489955 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:00.491289 | orchestrator |
2025-06-02 17:31:00.492636 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:00.493208 | orchestrator | Monday 02 June 2025 17:31:00 +0000 (0:00:00.249) 0:00:41.646 ***********
2025-06-02 17:31:00.698138 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:00.699251 | orchestrator |
2025-06-02 17:31:00.700176 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:00.701820 | orchestrator | Monday 02 June 2025 17:31:00 +0000 (0:00:00.209) 0:00:41.856 ***********
2025-06-02 17:31:00.923903 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:00.924125 | orchestrator |
2025-06-02 17:31:00.926764 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:00.926797 | orchestrator | Monday 02 June 2025 17:31:00 +0000 (0:00:00.225) 0:00:42.081 ***********
2025-06-02 17:31:01.126404 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:01.127351 | orchestrator |
2025-06-02 17:31:01.129437 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-06-02 17:31:01.130672 | orchestrator | Monday 02 June 2025 17:31:01 +0000 (0:00:00.201) 0:00:42.282 ***********
2025-06-02 17:31:01.324799 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-06-02 17:31:01.324896 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-06-02 17:31:01.324972 | orchestrator |
2025-06-02 17:31:01.328732 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-06-02 17:31:01.329430 | orchestrator | Monday 02 June 2025 17:31:01 +0000 (0:00:00.198) 0:00:42.481 ***********
2025-06-02 17:31:01.468437 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:01.471542 | orchestrator |
2025-06-02 17:31:01.472614 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-06-02 17:31:01.473778 | orchestrator | Monday 02 June 2025 17:31:01 +0000 (0:00:00.144) 0:00:42.625 ***********
2025-06-02 17:31:01.608424 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:01.608813 | orchestrator |
2025-06-02 17:31:01.611009 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-06-02 17:31:01.612251 | orchestrator | Monday 02 June 2025 17:31:01 +0000 (0:00:00.140) 0:00:42.765 ***********
2025-06-02 17:31:01.757188 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:01.757796 | orchestrator |
2025-06-02 17:31:01.757971 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-06-02 17:31:01.758329 | orchestrator | Monday 02 June 2025 17:31:01 +0000 (0:00:00.145) 0:00:42.911 ***********
2025-06-02 17:31:02.129603 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:31:02.130284 | orchestrator |
2025-06-02 17:31:02.132066 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-06-02 17:31:02.132120 | orchestrator | Monday 02 June 2025 17:31:02 +0000 (0:00:00.374) 0:00:43.286 ***********
2025-06-02 17:31:02.320268 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7944d10b-922c-5cd9-bd54-91ce5496d9bc'}})
2025-06-02 17:31:02.321407 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '455b12e9-4014-57cf-aec2-de5d805a7d14'}})
2025-06-02 17:31:02.322719 | orchestrator |
2025-06-02 17:31:02.324407 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-06-02 17:31:02.324430 | orchestrator | Monday 02 June 2025 17:31:02 +0000 (0:00:00.192) 0:00:43.478 ***********
2025-06-02 17:31:02.484253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7944d10b-922c-5cd9-bd54-91ce5496d9bc'}})
2025-06-02 17:31:02.484768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '455b12e9-4014-57cf-aec2-de5d805a7d14'}})
2025-06-02 17:31:02.485733 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:02.486483 | orchestrator |
2025-06-02 17:31:02.488327 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-06-02 17:31:02.488902 | orchestrator | Monday 02 June 2025 17:31:02 +0000 (0:00:00.163) 0:00:43.642 ***********
2025-06-02 17:31:02.650071 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7944d10b-922c-5cd9-bd54-91ce5496d9bc'}})
2025-06-02 17:31:02.650277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '455b12e9-4014-57cf-aec2-de5d805a7d14'}})
2025-06-02 17:31:02.651157 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:02.652225 | orchestrator |
2025-06-02 17:31:02.652951 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-06-02 17:31:02.653898 | orchestrator | Monday 02 June 2025 17:31:02 +0000 (0:00:00.165) 0:00:43.807 ***********
2025-06-02 17:31:02.807715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7944d10b-922c-5cd9-bd54-91ce5496d9bc'}})
2025-06-02 17:31:02.807927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '455b12e9-4014-57cf-aec2-de5d805a7d14'}})
2025-06-02 17:31:02.808779 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:02.809429 | orchestrator |
2025-06-02 17:31:02.811472 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-06-02 17:31:02.811514 | orchestrator | Monday 02 June 2025 17:31:02 +0000
(0:00:00.157) 0:00:43.965 ***********
2025-06-02 17:31:02.935837 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:31:02.936377 | orchestrator |
2025-06-02 17:31:02.937513 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-06-02 17:31:02.938528 | orchestrator | Monday 02 June 2025 17:31:02 +0000 (0:00:00.128) 0:00:44.093 ***********
2025-06-02 17:31:03.070197 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:31:03.070681 | orchestrator |
2025-06-02 17:31:03.072095 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-06-02 17:31:03.072671 | orchestrator | Monday 02 June 2025 17:31:03 +0000 (0:00:00.134) 0:00:44.228 ***********
2025-06-02 17:31:03.194194 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:03.195352 | orchestrator |
2025-06-02 17:31:03.197164 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-06-02 17:31:03.197300 | orchestrator | Monday 02 June 2025 17:31:03 +0000 (0:00:00.123) 0:00:44.351 ***********
2025-06-02 17:31:03.333306 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:03.333916 | orchestrator |
2025-06-02 17:31:03.334538 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-06-02 17:31:03.336920 | orchestrator | Monday 02 June 2025 17:31:03 +0000 (0:00:00.139) 0:00:44.490 ***********
2025-06-02 17:31:03.480484 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:03.480794 | orchestrator |
2025-06-02 17:31:03.482076 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-06-02 17:31:03.483163 | orchestrator | Monday 02 June 2025 17:31:03 +0000 (0:00:00.147) 0:00:44.638 ***********
2025-06-02 17:31:03.626459 | orchestrator | ok: [testbed-node-5] => {
2025-06-02 17:31:03.626793 | orchestrator |     "ceph_osd_devices": {
2025-06-02 17:31:03.627521 | orchestrator |         "sdb": {
2025-06-02 17:31:03.629686 | orchestrator |             "osd_lvm_uuid": "7944d10b-922c-5cd9-bd54-91ce5496d9bc"
2025-06-02 17:31:03.630882 | orchestrator |         },
2025-06-02 17:31:03.632043 | orchestrator |         "sdc": {
2025-06-02 17:31:03.632719 | orchestrator |             "osd_lvm_uuid": "455b12e9-4014-57cf-aec2-de5d805a7d14"
2025-06-02 17:31:03.633771 | orchestrator |         }
2025-06-02 17:31:03.634443 | orchestrator |     }
2025-06-02 17:31:03.635116 | orchestrator | }
2025-06-02 17:31:03.635854 | orchestrator |
2025-06-02 17:31:03.636791 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-06-02 17:31:03.637138 | orchestrator | Monday 02 June 2025 17:31:03 +0000 (0:00:00.144) 0:00:44.783 ***********
2025-06-02 17:31:03.771936 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:03.773296 | orchestrator |
2025-06-02 17:31:03.775494 | orchestrator | TASK [Print DB devices] ********************************************************
2025-06-02 17:31:03.775525 | orchestrator | Monday 02 June 2025 17:31:03 +0000 (0:00:00.146) 0:00:44.929 ***********
2025-06-02 17:31:04.154959 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:04.155369 | orchestrator |
2025-06-02 17:31:04.156305 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-06-02 17:31:04.157069 | orchestrator | Monday 02 June 2025 17:31:04 +0000 (0:00:00.379) 0:00:45.309 ***********
2025-06-02 17:31:04.291190 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:31:04.292156 | orchestrator |
2025-06-02 17:31:04.292835 | orchestrator | TASK [Print configuration data] ************************************************
2025-06-02 17:31:04.293953 | orchestrator | Monday 02 June 2025 17:31:04 +0000 (0:00:00.139) 0:00:45.448 ***********
2025-06-02 17:31:04.505258 | orchestrator | changed: [testbed-node-5] => {
2025-06-02 17:31:04.505342 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-06-02 17:31:04.506309 | orchestrator |         "ceph_osd_devices": {
2025-06-02 17:31:04.507635 | orchestrator |             "sdb": {
2025-06-02 17:31:04.508537 | orchestrator |                 "osd_lvm_uuid": "7944d10b-922c-5cd9-bd54-91ce5496d9bc"
2025-06-02 17:31:04.509291 | orchestrator |             },
2025-06-02 17:31:04.509953 | orchestrator |             "sdc": {
2025-06-02 17:31:04.510725 | orchestrator |                 "osd_lvm_uuid": "455b12e9-4014-57cf-aec2-de5d805a7d14"
2025-06-02 17:31:04.512157 | orchestrator |             }
2025-06-02 17:31:04.512885 | orchestrator |         },
2025-06-02 17:31:04.513281 | orchestrator |         "lvm_volumes": [
2025-06-02 17:31:04.514124 | orchestrator |             {
2025-06-02 17:31:04.514418 | orchestrator |                 "data": "osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc",
2025-06-02 17:31:04.515982 | orchestrator |                 "data_vg": "ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc"
2025-06-02 17:31:04.516579 | orchestrator |             },
2025-06-02 17:31:04.517768 | orchestrator |             {
2025-06-02 17:31:04.519054 | orchestrator |                 "data": "osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14",
2025-06-02 17:31:04.520368 | orchestrator |                 "data_vg": "ceph-455b12e9-4014-57cf-aec2-de5d805a7d14"
2025-06-02 17:31:04.521396 | orchestrator |             }
2025-06-02 17:31:04.526063 | orchestrator |         ]
2025-06-02 17:31:04.526743 | orchestrator |     }
2025-06-02 17:31:04.527768 | orchestrator | }
2025-06-02 17:31:04.528593 | orchestrator |
2025-06-02 17:31:04.529141 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-06-02 17:31:04.530140 | orchestrator | Monday 02 June 2025 17:31:04 +0000 (0:00:00.212) 0:00:45.661 ***********
2025-06-02 17:31:05.543524 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-02 17:31:05.544662 | orchestrator |
2025-06-02 17:31:05.546705 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:31:05.546774 | orchestrator | 2025-06-02 17:31:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:31:05.546789 | orchestrator | 2025-06-02 17:31:05 | INFO  | Please wait and do not abort execution.
2025-06-02 17:31:05.547684 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-02 17:31:05.548879 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-02 17:31:05.550002 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-06-02 17:31:05.551439 | orchestrator |
2025-06-02 17:31:05.552848 | orchestrator |
2025-06-02 17:31:05.555078 | orchestrator |
2025-06-02 17:31:05.556261 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:31:05.556863 | orchestrator | Monday 02 June 2025 17:31:05 +0000 (0:00:01.038) 0:00:46.699 ***********
2025-06-02 17:31:05.557786 | orchestrator | ===============================================================================
2025-06-02 17:31:05.558711 | orchestrator | Write configuration file ------------------------------------------------ 4.75s
2025-06-02 17:31:05.559221 | orchestrator | Add known partitions to the list of available block devices ------------- 1.29s
2025-06-02 17:31:05.559843 | orchestrator | Get initial list of available block devices ----------------------------- 1.21s
2025-06-02 17:31:05.561613 | orchestrator | Add known links to the list of available block devices ------------------ 1.18s
2025-06-02 17:31:05.562817 | orchestrator | Add known partitions to the list of available block devices ------------- 1.12s
2025-06-02 17:31:05.563946 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.06s
2025-06-02 17:31:05.565241 | orchestrator | Add known partitions to the list of available block devices ------------- 0.92s
2025-06-02 17:31:05.565734 | orchestrator | Add known links to the list of available block devices ------------------ 0.88s
2025-06-02 17:31:05.566274 | orchestrator | Add known links to the list of available block devices ------------------ 0.86s
2025-06-02 17:31:05.566951 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.83s
2025-06-02 17:31:05.567375 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s
2025-06-02 17:31:05.568478 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2025-06-02 17:31:05.568973 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2025-06-02 17:31:05.569436 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2025-06-02 17:31:05.569923 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2025-06-02 17:31:05.570531 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.70s
2025-06-02 17:31:05.571019 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.70s
2025-06-02 17:31:05.572393 | orchestrator | Print configuration data ------------------------------------------------ 0.67s
2025-06-02 17:31:05.572415 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s
2025-06-02 17:31:05.573413 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2025-06-02 17:31:18.131911 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:31:18.131978 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:31:18.131990 | orchestrator | Registering Redlock._release_script
2025-06-02 17:31:18.196599 | orchestrator | 2025-06-02 17:31:18 | INFO  | Task 6ed504f0-218f-4afd-9cfe-29657317b9f1 (sync inventory) is running in background. Output coming soon.
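The recap above prints a stable `osd_lvm_uuid` per OSD device and an `lvm_volumes` list derived from it ("block only" layout, no separate DB/WAL volumes). As a rough illustration of that mapping, not the actual playbook logic: the printed UUIDs are version-5 UUIDs, so a deterministic `uuid5` scheme is sketched below over a hypothetical per-cluster namespace (`CLUSTER_NS` is invented; only the `osd-block-<uuid>` / `ceph-<uuid>` naming is taken from the log).

```python
import uuid

# Hypothetical per-cluster namespace; the real input to the UUID scheme is
# an assumption. Only the LV/VG naming pattern comes from the log above.
CLUSTER_NS = uuid.UUID("00000000-0000-0000-0000-000000000000")

def osd_lvm_uuid(device: str) -> str:
    # uuid5 is deterministic: same namespace + same device name yields the
    # same UUID, which keeps OSD LV/VG names stable across playbook runs.
    return str(uuid.uuid5(CLUSTER_NS, device))

def build_lvm_volumes(devices: list[str]) -> list[dict]:
    # "block only" layout: one data LV per device, no db_vg/wal_vg entries.
    return [
        {
            "data": f"osd-block-{osd_lvm_uuid(dev)}",
            "data_vg": f"ceph-{osd_lvm_uuid(dev)}",
        }
        for dev in devices
    ]

lvm_volumes = build_lvm_volumes(["sdb", "sdc"])
```

With a fixed namespace, re-running the sketch reproduces the same `lvm_volumes` list, matching the idempotent behaviour the play relies on.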
2025-06-02 17:31:47.541075 | orchestrator | 2025-06-02 17:31:28 | INFO  | Starting group_vars file reorganization
2025-06-02 17:31:47.541186 | orchestrator | 2025-06-02 17:31:28 | INFO  | Moved 0 file(s) to their respective directories
2025-06-02 17:31:47.541202 | orchestrator | 2025-06-02 17:31:28 | INFO  | Group_vars file reorganization completed
2025-06-02 17:31:47.541214 | orchestrator | 2025-06-02 17:31:30 | INFO  | Starting variable preparation from inventory
2025-06-02 17:31:47.541225 | orchestrator | 2025-06-02 17:31:32 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-06-02 17:31:47.541236 | orchestrator | 2025-06-02 17:31:32 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-06-02 17:31:47.541273 | orchestrator | 2025-06-02 17:31:32 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-06-02 17:31:47.541285 | orchestrator | 2025-06-02 17:31:32 | INFO  | 3 file(s) written, 6 host(s) processed
2025-06-02 17:31:47.541296 | orchestrator | 2025-06-02 17:31:32 | INFO  | Variable preparation completed
2025-06-02 17:31:47.541307 | orchestrator | 2025-06-02 17:31:33 | INFO  | Starting inventory overwrite handling
2025-06-02 17:31:47.541317 | orchestrator | 2025-06-02 17:31:33 | INFO  | Handling group overwrites in 99-overwrite
2025-06-02 17:31:47.541328 | orchestrator | 2025-06-02 17:31:33 | INFO  | Removing group frr:children from 60-generic
2025-06-02 17:31:47.541339 | orchestrator | 2025-06-02 17:31:33 | INFO  | Removing group storage:children from 50-kolla
2025-06-02 17:31:47.541349 | orchestrator | 2025-06-02 17:31:33 | INFO  | Removing group netbird:children from 50-infrastruture
2025-06-02 17:31:47.541369 | orchestrator | 2025-06-02 17:31:33 | INFO  | Removing group ceph-mds from 50-ceph
2025-06-02 17:31:47.541380 | orchestrator | 2025-06-02 17:31:33 | INFO  | Removing group ceph-rgw from 50-ceph
2025-06-02 17:31:47.541391 | orchestrator | 2025-06-02 17:31:33 | INFO  | Handling group overwrites in 20-roles
2025-06-02 17:31:47.541401 | orchestrator | 2025-06-02 17:31:33 | INFO  | Removing group k3s_node from 50-infrastruture
2025-06-02 17:31:47.541412 | orchestrator | 2025-06-02 17:31:33 | INFO  | Removed 6 group(s) in total
2025-06-02 17:31:47.541423 | orchestrator | 2025-06-02 17:31:33 | INFO  | Inventory overwrite handling completed
2025-06-02 17:31:47.541433 | orchestrator | 2025-06-02 17:31:34 | INFO  | Starting merge of inventory files
2025-06-02 17:31:47.541443 | orchestrator | 2025-06-02 17:31:34 | INFO  | Inventory files merged successfully
2025-06-02 17:31:47.541454 | orchestrator | 2025-06-02 17:31:38 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-06-02 17:31:47.541465 | orchestrator | 2025-06-02 17:31:45 | INFO  | Successfully wrote ClusterShell configuration
2025-06-02 17:31:49.320344 | orchestrator | 2025-06-02 17:31:49 | INFO  | Task 7cbc36f2-ed70-4b1b-b819-8a79f8f1825d (ceph-create-lvm-devices) was prepared for execution.
2025-06-02 17:31:49.320415 | orchestrator | 2025-06-02 17:31:49 | INFO  | It takes a moment until task 7cbc36f2-ed70-4b1b-b819-8a79f8f1825d (ceph-create-lvm-devices) has been started and output is visible here.
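The "Generating ClusterShell configuration from Ansible inventory" step above maps inventory groups to ClusterShell node groups so that `clush`-style commands can target the same hosts. A minimal sketch of that idea, with invented group contents (the real converter and its output file layout are not shown in this log):

```python
# Invented example data: the actual inventory groups are not visible here;
# only the testbed host names appear in the log.
inventory_groups = {
    "ceph-osd": ["testbed-node-3", "testbed-node-4", "testbed-node-5"],
    "manager": ["testbed-manager"],
}

def render_clush_groups(groups: dict[str, list[str]]) -> str:
    # Flat ClusterShell groups syntax: one "<group>: <node>,<node>,..." line
    # per group, sorted for deterministic output.
    lines = [
        f"{name}: {','.join(sorted(hosts))}"
        for name, hosts in sorted(groups.items())
    ]
    return "\n".join(lines) + "\n"

clush_config = render_clush_groups(inventory_groups)
```

Deterministic ordering matters in a step like this: the play reruns on every deploy, and a stable rendering avoids spurious file changes.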
2025-06-02 17:31:53.638822 | orchestrator | 2025-06-02 17:31:53.639924 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-02 17:31:53.639966 | orchestrator | 2025-06-02 17:31:53.640959 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 17:31:53.642455 | orchestrator | Monday 02 June 2025 17:31:53 +0000 (0:00:00.323) 0:00:00.323 *********** 2025-06-02 17:31:53.879692 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 17:31:53.879838 | orchestrator | 2025-06-02 17:31:53.881177 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-02 17:31:53.882219 | orchestrator | Monday 02 June 2025 17:31:53 +0000 (0:00:00.244) 0:00:00.567 *********** 2025-06-02 17:31:54.103917 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:31:54.104854 | orchestrator | 2025-06-02 17:31:54.105921 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:54.107233 | orchestrator | Monday 02 June 2025 17:31:54 +0000 (0:00:00.224) 0:00:00.792 *********** 2025-06-02 17:31:54.515064 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-02 17:31:54.515228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-02 17:31:54.516431 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-02 17:31:54.517619 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-02 17:31:54.518933 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-02 17:31:54.519516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-02 17:31:54.520303 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-02 17:31:54.521222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-02 17:31:54.521959 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-02 17:31:54.522495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-02 17:31:54.523482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-02 17:31:54.523764 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-02 17:31:54.524248 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-02 17:31:54.524976 | orchestrator | 2025-06-02 17:31:54.525859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:54.527497 | orchestrator | Monday 02 June 2025 17:31:54 +0000 (0:00:00.411) 0:00:01.203 *********** 2025-06-02 17:31:54.978741 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:54.979982 | orchestrator | 2025-06-02 17:31:54.981377 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:54.982477 | orchestrator | Monday 02 June 2025 17:31:54 +0000 (0:00:00.461) 0:00:01.664 *********** 2025-06-02 17:31:55.182087 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:55.183322 | orchestrator | 2025-06-02 17:31:55.183433 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:55.184716 | orchestrator | Monday 02 June 2025 17:31:55 +0000 (0:00:00.203) 0:00:01.868 *********** 2025-06-02 17:31:55.373288 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:55.373428 | orchestrator | 2025-06-02 17:31:55.373509 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-06-02 17:31:55.374105 | orchestrator | Monday 02 June 2025 17:31:55 +0000 (0:00:00.192) 0:00:02.061 *********** 2025-06-02 17:31:55.579226 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:55.580162 | orchestrator | 2025-06-02 17:31:55.582169 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:55.582218 | orchestrator | Monday 02 June 2025 17:31:55 +0000 (0:00:00.206) 0:00:02.267 *********** 2025-06-02 17:31:55.782319 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:55.782979 | orchestrator | 2025-06-02 17:31:55.784621 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:55.784792 | orchestrator | Monday 02 June 2025 17:31:55 +0000 (0:00:00.202) 0:00:02.469 *********** 2025-06-02 17:31:55.993427 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:55.994123 | orchestrator | 2025-06-02 17:31:55.996995 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:55.997467 | orchestrator | Monday 02 June 2025 17:31:55 +0000 (0:00:00.212) 0:00:02.681 *********** 2025-06-02 17:31:56.199489 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:56.202364 | orchestrator | 2025-06-02 17:31:56.202398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:56.203276 | orchestrator | Monday 02 June 2025 17:31:56 +0000 (0:00:00.205) 0:00:02.887 *********** 2025-06-02 17:31:56.402703 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:31:56.404639 | orchestrator | 2025-06-02 17:31:56.407013 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:31:56.407367 | orchestrator | Monday 02 June 2025 17:31:56 +0000 (0:00:00.203) 0:00:03.091 *********** 2025-06-02 17:31:56.820860 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12)
2025-06-02 17:31:56.822655 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12)
2025-06-02 17:31:56.822979 | orchestrator |
2025-06-02 17:31:56.824991 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:56.825676 | orchestrator | Monday 02 June 2025 17:31:56 +0000 (0:00:00.417) 0:00:03.508 ***********
2025-06-02 17:31:57.233935 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_f446ae25-d9a7-444f-b563-a9cba680652a)
2025-06-02 17:31:57.234299 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_f446ae25-d9a7-444f-b563-a9cba680652a)
2025-06-02 17:31:57.235830 | orchestrator |
2025-06-02 17:31:57.237106 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:57.237505 | orchestrator | Monday 02 June 2025 17:31:57 +0000 (0:00:00.414) 0:00:03.922 ***********
2025-06-02 17:31:57.864489 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_dd4bab9d-0787-4709-bf4e-89aace2da140)
2025-06-02 17:31:57.865285 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_dd4bab9d-0787-4709-bf4e-89aace2da140)
2025-06-02 17:31:57.866486 | orchestrator |
2025-06-02 17:31:57.867261 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:57.868668 | orchestrator | Monday 02 June 2025 17:31:57 +0000 (0:00:00.629) 0:00:04.551 ***********
2025-06-02 17:31:58.526813 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_c7f9d288-1a32-443d-a362-6ba679ef0f8f)
2025-06-02 17:31:58.527198 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_c7f9d288-1a32-443d-a362-6ba679ef0f8f)
2025-06-02 17:31:58.527555 | orchestrator |
2025-06-02 17:31:58.528346 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:31:58.529189 | orchestrator | Monday 02 June 2025 17:31:58 +0000 (0:00:00.662) 0:00:05.214 ***********
2025-06-02 17:31:59.322587 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 17:31:59.323501 | orchestrator |
2025-06-02 17:31:59.325685 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:59.326176 | orchestrator | Monday 02 June 2025 17:31:59 +0000 (0:00:00.795) 0:00:06.009 ***********
2025-06-02 17:31:59.734091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-06-02 17:31:59.734502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-06-02 17:31:59.736113 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-06-02 17:31:59.737176 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-06-02 17:31:59.738093 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-06-02 17:31:59.738957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-06-02 17:31:59.739706 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-06-02 17:31:59.740143 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-06-02 17:31:59.740672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-06-02 17:31:59.740983 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-06-02 17:31:59.741364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-06-02 17:31:59.741954 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-06-02 17:31:59.742329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-06-02 17:31:59.742833 | orchestrator |
2025-06-02 17:31:59.743364 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:59.744219 | orchestrator | Monday 02 June 2025 17:31:59 +0000 (0:00:00.412) 0:00:06.421 ***********
2025-06-02 17:31:59.936855 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:31:59.939439 | orchestrator |
2025-06-02 17:31:59.939602 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:31:59.939707 | orchestrator | Monday 02 June 2025 17:31:59 +0000 (0:00:00.200) 0:00:06.622 ***********
2025-06-02 17:32:00.133465 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:00.135956 | orchestrator |
2025-06-02 17:32:00.136803 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:00.137277 | orchestrator | Monday 02 June 2025 17:32:00 +0000 (0:00:00.198) 0:00:06.821 ***********
2025-06-02 17:32:00.339690 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:00.340892 | orchestrator |
2025-06-02 17:32:00.341842 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:00.343672 | orchestrator | Monday 02 June 2025 17:32:00 +0000 (0:00:00.206) 0:00:07.028 ***********
2025-06-02 17:32:00.531444 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:00.531877 | orchestrator |
2025-06-02 17:32:00.533052 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:00.533671 | orchestrator | Monday 02 June 2025 17:32:00 +0000 (0:00:00.192) 0:00:07.220 ***********
2025-06-02 17:32:00.746446 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:00.747477 | orchestrator |
2025-06-02 17:32:00.748441 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:00.748772 | orchestrator | Monday 02 June 2025 17:32:00 +0000 (0:00:00.214) 0:00:07.435 ***********
2025-06-02 17:32:00.969493 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:00.971260 | orchestrator |
2025-06-02 17:32:00.971497 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:00.971558 | orchestrator | Monday 02 June 2025 17:32:00 +0000 (0:00:00.221) 0:00:07.656 ***********
2025-06-02 17:32:01.186182 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:01.187258 | orchestrator |
2025-06-02 17:32:01.187289 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:01.187951 | orchestrator | Monday 02 June 2025 17:32:01 +0000 (0:00:00.217) 0:00:07.874 ***********
2025-06-02 17:32:01.381493 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:01.382834 | orchestrator |
2025-06-02 17:32:01.384275 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:01.385246 | orchestrator | Monday 02 June 2025 17:32:01 +0000 (0:00:00.194) 0:00:08.069 ***********
2025-06-02 17:32:02.489867 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-06-02 17:32:02.490285 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-06-02 17:32:02.491512 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-06-02 17:32:02.492428 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-06-02 17:32:02.492950 | orchestrator |
2025-06-02 17:32:02.493809 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:02.494782 | orchestrator | Monday 02 June 2025 17:32:02 +0000 (0:00:01.107) 0:00:09.176 ***********
2025-06-02 17:32:02.692898 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:02.692998 | orchestrator |
2025-06-02 17:32:02.694585 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:02.694926 | orchestrator | Monday 02 June 2025 17:32:02 +0000 (0:00:00.204) 0:00:09.381 ***********
2025-06-02 17:32:02.920627 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:02.921407 | orchestrator |
2025-06-02 17:32:02.922144 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:02.922949 | orchestrator | Monday 02 June 2025 17:32:02 +0000 (0:00:00.225) 0:00:09.606 ***********
2025-06-02 17:32:03.112007 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:03.112342 | orchestrator |
2025-06-02 17:32:03.113240 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:03.114235 | orchestrator | Monday 02 June 2025 17:32:03 +0000 (0:00:00.193) 0:00:09.800 ***********
2025-06-02 17:32:03.306214 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:03.307194 | orchestrator |
2025-06-02 17:32:03.307335 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-02 17:32:03.308203 | orchestrator | Monday 02 June 2025 17:32:03 +0000 (0:00:00.194) 0:00:09.994 ***********
2025-06-02 17:32:03.440028 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:03.441019 | orchestrator |
2025-06-02 17:32:03.443271 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-06-02 17:32:03.444419 | orchestrator | Monday 02 June 2025 17:32:03 +0000 (0:00:00.133) 0:00:10.128 ***********
2025-06-02 17:32:03.637183 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '8450978f-95f9-56a8-b94f-b89f59985534'}})
2025-06-02 17:32:03.637265 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '4af7f5ab-70f7-5f81-8195-4d6574833a1e'}})
2025-06-02 17:32:03.637400 | orchestrator |
2025-06-02 17:32:03.637988 | orchestrator | TASK [Create block VGs] ********************************************************
2025-06-02 17:32:03.638455 | orchestrator | Monday 02 June 2025 17:32:03 +0000 (0:00:00.195) 0:00:10.323 ***********
2025-06-02 17:32:05.677623 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:05.677908 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:05.678751 | orchestrator |
2025-06-02 17:32:05.679770 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-02 17:32:05.680560 | orchestrator | Monday 02 June 2025 17:32:05 +0000 (0:00:02.041) 0:00:12.365 ***********
2025-06-02 17:32:05.847024 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:05.847125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:05.848042 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:05.849669 | orchestrator |
2025-06-02 17:32:05.849712 | orchestrator | TASK [Create block LVs] ********************************************************
2025-06-02 17:32:05.850281 | orchestrator | Monday 02 June 2025 17:32:05 +0000 (0:00:00.168) 0:00:12.534 ***********
2025-06-02 17:32:07.306729 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:07.308082 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:07.309167 | orchestrator |
2025-06-02 17:32:07.309647 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-06-02 17:32:07.311465 | orchestrator | Monday 02 June 2025 17:32:07 +0000 (0:00:01.459) 0:00:13.993 ***********
2025-06-02 17:32:07.466115 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:07.466336 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:07.466641 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:07.467681 | orchestrator |
2025-06-02 17:32:07.467710 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-02 17:32:07.468012 | orchestrator | Monday 02 June 2025 17:32:07 +0000 (0:00:00.138) 0:00:14.155 ***********
2025-06-02 17:32:07.605328 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:07.606097 | orchestrator |
2025-06-02 17:32:07.607022 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-02 17:32:07.607599 | orchestrator | Monday 02 June 2025 17:32:07 +0000 (0:00:00.138) 0:00:14.294 ***********
2025-06-02 17:32:07.984040 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:07.984146 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:07.984161 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:07.984948 | orchestrator |
2025-06-02 17:32:07.986362 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-06-02 17:32:07.987070 | orchestrator | Monday 02 June 2025 17:32:07 +0000 (0:00:00.377) 0:00:14.671 ***********
2025-06-02 17:32:08.125248 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:08.125441 | orchestrator |
2025-06-02 17:32:08.126298 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-06-02 17:32:08.127302 | orchestrator | Monday 02 June 2025 17:32:08 +0000 (0:00:00.141) 0:00:14.813 ***********
2025-06-02 17:32:08.312140 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:08.312846 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:08.314734 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:08.315692 | orchestrator |
2025-06-02 17:32:08.316686 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-06-02 17:32:08.317683 | orchestrator | Monday 02 June 2025 17:32:08 +0000 (0:00:00.187) 0:00:15.000 ***********
2025-06-02 17:32:08.460265 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:08.460784 | orchestrator |
2025-06-02 17:32:08.461352 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-06-02 17:32:08.462444 | orchestrator | Monday 02 June 2025 17:32:08 +0000 (0:00:00.147) 0:00:15.148 ***********
2025-06-02 17:32:08.616836 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:08.617231 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:08.618212 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:08.618668 | orchestrator |
2025-06-02 17:32:08.619372 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-06-02 17:32:08.619800 | orchestrator | Monday 02 June 2025 17:32:08 +0000 (0:00:00.157) 0:00:15.306 ***********
2025-06-02 17:32:08.752834 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:32:08.753946 | orchestrator |
2025-06-02 17:32:08.754664 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-06-02 17:32:08.756086 | orchestrator | Monday 02 June 2025 17:32:08 +0000 (0:00:00.133) 0:00:15.439 ***********
2025-06-02 17:32:08.921435 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:08.922951 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:08.924518 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:08.925697 | orchestrator |
2025-06-02 17:32:08.926134 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-06-02 17:32:08.927306 | orchestrator | Monday 02 June 2025 17:32:08 +0000 (0:00:00.169) 0:00:15.609 ***********
2025-06-02 17:32:09.083941 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
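[Editor's note: the "Count OSDs put on ..." tasks above tally how many `lvm_volumes` entries reference each DB/WAL VG. A minimal Python sketch of that tally, using hypothetical `lvm_volumes` data (not taken from this job; the entries in this run only carry `data`/`data_vg`, which is why the `_num_osds_wanted_per_*` dicts printed later are empty):]

```python
from collections import Counter

# Hypothetical lvm_volumes list in the shape the playbook iterates over.
# Only entries that declare a db_vg contribute to the per-DB-VG count.
lvm_volumes = [
    {"data": "osd-block-aaa", "data_vg": "ceph-aaa"},
    {"data": "osd-block-bbb", "data_vg": "ceph-bbb", "db_vg": "ceph-db-0"},
    {"data": "osd-block-ccc", "data_vg": "ceph-ccc", "db_vg": "ceph-db-0"},
]

# Count how many OSDs want a DB LV on each DB VG.
num_osds_wanted_per_db_vg = Counter(
    v["db_vg"] for v in lvm_volumes if "db_vg" in v
)

print(dict(num_osds_wanted_per_db_vg))  # → {'ceph-db-0': 2}
```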
2025-06-02 17:32:09.084982 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:09.087624 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:09.087651 | orchestrator |
2025-06-02 17:32:09.088640 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-06-02 17:32:09.089798 | orchestrator | Monday 02 June 2025 17:32:09 +0000 (0:00:00.162) 0:00:15.772 ***********
2025-06-02 17:32:09.241293 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:09.243147 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:09.245190 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:09.245653 | orchestrator |
2025-06-02 17:32:09.247105 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-06-02 17:32:09.247761 | orchestrator | Monday 02 June 2025 17:32:09 +0000 (0:00:00.157) 0:00:15.929 ***********
2025-06-02 17:32:09.400451 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:09.401259 | orchestrator |
2025-06-02 17:32:09.402237 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-06-02 17:32:09.402950 | orchestrator | Monday 02 June 2025 17:32:09 +0000 (0:00:00.158) 0:00:16.088 ***********
2025-06-02 17:32:09.559330 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:09.561821 | orchestrator |
2025-06-02 17:32:09.561862 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-06-02 17:32:09.562884 | orchestrator | Monday 02 June 2025 17:32:09 +0000 (0:00:00.158) 0:00:16.246 ***********
2025-06-02 17:32:09.690110 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:09.692670 | orchestrator |
2025-06-02 17:32:09.694247 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-06-02 17:32:09.695625 | orchestrator | Monday 02 June 2025 17:32:09 +0000 (0:00:00.131) 0:00:16.378 ***********
2025-06-02 17:32:10.060060 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 17:32:10.060111 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-06-02 17:32:10.060815 | orchestrator | }
2025-06-02 17:32:10.061323 | orchestrator |
2025-06-02 17:32:10.062155 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-06-02 17:32:10.064207 | orchestrator | Monday 02 June 2025 17:32:10 +0000 (0:00:00.370) 0:00:16.748 ***********
2025-06-02 17:32:10.221987 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 17:32:10.222166 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-06-02 17:32:10.222894 | orchestrator | }
2025-06-02 17:32:10.223807 | orchestrator |
2025-06-02 17:32:10.224192 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-02 17:32:10.224644 | orchestrator | Monday 02 June 2025 17:32:10 +0000 (0:00:00.162) 0:00:16.910 ***********
2025-06-02 17:32:10.354718 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 17:32:10.354955 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-06-02 17:32:10.356304 | orchestrator | }
2025-06-02 17:32:10.357448 | orchestrator |
2025-06-02 17:32:10.360354 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-02 17:32:10.361025 | orchestrator | Monday 02 June 2025 17:32:10 +0000 (0:00:00.131) 0:00:17.042 ***********
2025-06-02 17:32:11.021308 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:32:11.022579 | orchestrator |
2025-06-02 17:32:11.023040 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-02 17:32:11.023834 | orchestrator | Monday 02 June 2025 17:32:11 +0000 (0:00:00.666) 0:00:17.709 ***********
2025-06-02 17:32:11.530187 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:32:11.530749 | orchestrator |
2025-06-02 17:32:11.532368 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-02 17:32:11.533363 | orchestrator | Monday 02 June 2025 17:32:11 +0000 (0:00:00.507) 0:00:18.217 ***********
2025-06-02 17:32:12.044357 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:32:12.045109 | orchestrator |
2025-06-02 17:32:12.046292 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-02 17:32:12.046605 | orchestrator | Monday 02 June 2025 17:32:12 +0000 (0:00:00.515) 0:00:18.732 ***********
2025-06-02 17:32:12.205837 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:32:12.206077 | orchestrator |
2025-06-02 17:32:12.207428 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-02 17:32:12.208723 | orchestrator | Monday 02 June 2025 17:32:12 +0000 (0:00:00.161) 0:00:18.894 ***********
2025-06-02 17:32:12.325877 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:12.326844 | orchestrator |
2025-06-02 17:32:12.327512 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-02 17:32:12.329230 | orchestrator | Monday 02 June 2025 17:32:12 +0000 (0:00:00.119) 0:00:19.014 ***********
2025-06-02 17:32:12.452956 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:12.453512 | orchestrator |
2025-06-02 17:32:12.454249 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-02 17:32:12.454447 | orchestrator | Monday 02 June 2025 17:32:12 +0000 (0:00:00.127) 0:00:19.141 ***********
2025-06-02 17:32:12.601544 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 17:32:12.603638 | orchestrator |     "vgs_report": {
2025-06-02 17:32:12.606111 | orchestrator |         "vg": []
2025-06-02 17:32:12.606156 | orchestrator |     }
2025-06-02 17:32:12.606499 | orchestrator | }
2025-06-02 17:32:12.608388 | orchestrator |
2025-06-02 17:32:12.608847 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-02 17:32:12.609941 | orchestrator | Monday 02 June 2025 17:32:12 +0000 (0:00:00.149) 0:00:19.290 ***********
2025-06-02 17:32:12.736289 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:12.739733 | orchestrator |
2025-06-02 17:32:12.739805 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-02 17:32:12.742645 | orchestrator | Monday 02 June 2025 17:32:12 +0000 (0:00:00.133) 0:00:19.423 ***********
2025-06-02 17:32:12.862259 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:12.863314 | orchestrator |
2025-06-02 17:32:12.864332 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-02 17:32:12.865317 | orchestrator | Monday 02 June 2025 17:32:12 +0000 (0:00:00.128) 0:00:19.551 ***********
2025-06-02 17:32:13.225869 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:13.226110 | orchestrator |
2025-06-02 17:32:13.227157 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-02 17:32:13.227976 | orchestrator | Monday 02 June 2025 17:32:13 +0000 (0:00:00.361) 0:00:19.912 ***********
2025-06-02 17:32:13.373186 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:13.374004 | orchestrator |
2025-06-02 17:32:13.375016 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-02 17:32:13.376410 | orchestrator | Monday 02 June 2025 17:32:13 +0000 (0:00:00.148) 0:00:20.061 ***********
2025-06-02 17:32:13.526256 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:13.527327 | orchestrator |
2025-06-02 17:32:13.528082 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-02 17:32:13.529100 | orchestrator | Monday 02 June 2025 17:32:13 +0000 (0:00:00.151) 0:00:20.213 ***********
2025-06-02 17:32:13.664683 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:13.665725 | orchestrator |
2025-06-02 17:32:13.667604 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-02 17:32:13.667631 | orchestrator | Monday 02 June 2025 17:32:13 +0000 (0:00:00.139) 0:00:20.352 ***********
2025-06-02 17:32:13.813583 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:13.813722 | orchestrator |
2025-06-02 17:32:13.813899 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-02 17:32:13.814244 | orchestrator | Monday 02 June 2025 17:32:13 +0000 (0:00:00.149) 0:00:20.501 ***********
2025-06-02 17:32:13.951245 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:13.952142 | orchestrator |
2025-06-02 17:32:13.953104 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-02 17:32:13.954135 | orchestrator | Monday 02 June 2025 17:32:13 +0000 (0:00:00.138) 0:00:20.639 ***********
2025-06-02 17:32:14.076945 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:14.077798 | orchestrator |
2025-06-02 17:32:14.078304 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-06-02 17:32:14.079032 | orchestrator | Monday 02 June 2025 17:32:14 +0000 (0:00:00.125) 0:00:20.765 ***********
2025-06-02 17:32:14.222915 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:14.224654 | orchestrator |
2025-06-02 17:32:14.224684 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-02 17:32:14.225642 | orchestrator | Monday 02 June 2025 17:32:14 +0000 (0:00:00.144) 0:00:20.910 ***********
2025-06-02 17:32:14.352828 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:14.353726 | orchestrator |
2025-06-02 17:32:14.355308 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-02 17:32:14.356170 | orchestrator | Monday 02 June 2025 17:32:14 +0000 (0:00:00.131) 0:00:21.041 ***********
2025-06-02 17:32:14.513503 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:14.514158 | orchestrator |
2025-06-02 17:32:14.515830 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-02 17:32:14.517583 | orchestrator | Monday 02 June 2025 17:32:14 +0000 (0:00:00.158) 0:00:21.200 ***********
2025-06-02 17:32:14.661842 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:14.662591 | orchestrator |
2025-06-02 17:32:14.663110 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-02 17:32:14.665187 | orchestrator | Monday 02 June 2025 17:32:14 +0000 (0:00:00.150) 0:00:21.350 ***********
2025-06-02 17:32:14.799413 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:14.800227 | orchestrator |
2025-06-02 17:32:14.801216 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-02 17:32:14.802292 | orchestrator | Monday 02 June 2025 17:32:14 +0000 (0:00:00.136) 0:00:21.487 ***********
2025-06-02 17:32:14.953806 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:14.955102 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:14.957268 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:14.958449 | orchestrator |
2025-06-02 17:32:14.959417 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-02 17:32:14.960810 | orchestrator | Monday 02 June 2025 17:32:14 +0000 (0:00:00.155) 0:00:21.642 ***********
2025-06-02 17:32:15.337183 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:15.338722 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:15.340003 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:15.341827 | orchestrator |
2025-06-02 17:32:15.342996 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-02 17:32:15.344190 | orchestrator | Monday 02 June 2025 17:32:15 +0000 (0:00:00.382) 0:00:22.025 ***********
2025-06-02 17:32:15.498694 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:15.498876 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:15.499818 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:15.501296 | orchestrator |
2025-06-02 17:32:15.501495 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-02 17:32:15.502420 | orchestrator | Monday 02 June 2025 17:32:15 +0000 (0:00:00.160) 0:00:22.186 ***********
2025-06-02 17:32:15.662118 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:15.662882 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:15.664902 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:15.665037 | orchestrator |
2025-06-02 17:32:15.666126 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-02 17:32:15.667087 | orchestrator | Monday 02 June 2025 17:32:15 +0000 (0:00:00.163) 0:00:22.349 ***********
2025-06-02 17:32:15.812604 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:15.813464 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:15.814332 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:15.815007 | orchestrator |
2025-06-02 17:32:15.816883 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-02 17:32:15.816908 | orchestrator | Monday 02 June 2025 17:32:15 +0000 (0:00:00.151) 0:00:22.500 ***********
2025-06-02 17:32:15.968595 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:15.970133 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:15.971625 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:15.972443 | orchestrator |
2025-06-02 17:32:15.973880 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-02 17:32:15.975080 | orchestrator | Monday 02 June 2025 17:32:15 +0000 (0:00:00.155) 0:00:22.656 ***********
2025-06-02 17:32:16.130986 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:16.131179 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:16.131503 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:16.132992 | orchestrator |
2025-06-02 17:32:16.133595 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-02 17:32:16.134285 | orchestrator | Monday 02 June 2025 17:32:16 +0000 (0:00:00.163) 0:00:22.820 ***********
2025-06-02 17:32:16.291865 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:16.293287 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:16.295001 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:16.295029 | orchestrator |
2025-06-02 17:32:16.296247 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-02 17:32:16.299994 | orchestrator | Monday 02 June 2025 17:32:16 +0000 (0:00:00.160) 0:00:22.980 ***********
2025-06-02 17:32:16.794118 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:32:16.795177 | orchestrator |
2025-06-02 17:32:16.795215 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-02 17:32:16.795866 | orchestrator | Monday 02 June 2025 17:32:16 +0000 (0:00:00.501) 0:00:23.482 ***********
2025-06-02 17:32:17.298936 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:32:17.299556 | orchestrator |
2025-06-02 17:32:17.300140 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-02 17:32:17.300683 | orchestrator | Monday 02 June 2025 17:32:17 +0000 (0:00:00.505) 0:00:23.987 ***********
2025-06-02 17:32:17.457208 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:32:17.457302 | orchestrator |
2025-06-02 17:32:17.458115 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-02 17:32:17.458862 | orchestrator | Monday 02 June 2025 17:32:17 +0000 (0:00:00.159) 0:00:24.146 ***********
2025-06-02 17:32:17.636425 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'vg_name': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:17.636658 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'vg_name': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:17.636677 | orchestrator |
2025-06-02 17:32:17.636989 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-02 17:32:17.637661 | orchestrator | Monday 02 June 2025 17:32:17 +0000 (0:00:00.177) 0:00:24.324 ***********
2025-06-02 17:32:17.793806 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:17.793997 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:17.794221 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:17.796028 | orchestrator |
2025-06-02 17:32:17.798349 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-02 17:32:17.799016 | orchestrator | Monday 02 June 2025 17:32:17 +0000 (0:00:00.157) 0:00:24.482 ***********
2025-06-02 17:32:18.181958 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:18.182718 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:18.183607 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:18.184825 | orchestrator |
2025-06-02 17:32:18.185265 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 17:32:18.186271 | orchestrator | Monday 02 June 2025 17:32:18 +0000 (0:00:00.388) 0:00:24.870 ***********
2025-06-02 17:32:18.357464 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
2025-06-02 17:32:18.357603 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
2025-06-02 17:32:18.358358 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:32:18.358854 | orchestrator |
2025-06-02 17:32:18.360180 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 17:32:18.361166 | orchestrator | Monday 02 June 2025 17:32:18 +0000 (0:00:00.175) 0:00:25.046 ***********
2025-06-02 17:32:18.676435 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 17:32:18.676596 | orchestrator |     "lvm_report": {
2025-06-02 17:32:18.676685 | orchestrator |         "lv": [
2025-06-02 17:32:18.677562 | orchestrator |             {
2025-06-02 17:32:18.678561 | orchestrator |                 "lv_name": "osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e",
2025-06-02 17:32:18.679569 | orchestrator |                 "vg_name": "ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e"
2025-06-02 17:32:18.680586 | orchestrator |             },
2025-06-02 17:32:18.681167 | orchestrator |             {
2025-06-02 17:32:18.681948 | orchestrator |                 "lv_name": "osd-block-8450978f-95f9-56a8-b94f-b89f59985534",
2025-06-02 17:32:18.682656 | orchestrator |                 "vg_name": "ceph-8450978f-95f9-56a8-b94f-b89f59985534"
2025-06-02 17:32:18.683713 | orchestrator |             }
2025-06-02 17:32:18.684570 | orchestrator |         ],
2025-06-02 17:32:18.685096 | orchestrator |         "pv": [
2025-06-02 17:32:18.685805 | orchestrator |             {
2025-06-02 17:32:18.687182 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-02 17:32:18.687221 | orchestrator |                 "vg_name": "ceph-8450978f-95f9-56a8-b94f-b89f59985534"
2025-06-02 17:32:18.687582 | orchestrator |             },
2025-06-02 17:32:18.687722 | orchestrator |             {
2025-06-02 17:32:18.688292 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-02 17:32:18.690410 | orchestrator |                 "vg_name": "ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e"
2025-06-02 17:32:18.690435 | orchestrator |             }
2025-06-02 17:32:18.691127 | orchestrator |         ]
2025-06-02 17:32:18.691277 | orchestrator |     }
2025-06-02 17:32:18.692025 | orchestrator | }
2025-06-02 17:32:18.692270 | orchestrator |
2025-06-02 17:32:18.692473 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-02 17:32:18.692721 | orchestrator |
2025-06-02 17:32:18.693167 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 17:32:18.693465 | orchestrator | Monday 02 June 2025 17:32:18 +0000 (0:00:00.318) 0:00:25.364 ***********
2025-06-02 17:32:18.925683 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-06-02 17:32:18.926428 | orchestrator |
2025-06-02 17:32:18.928086 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 17:32:18.928765 | orchestrator | Monday 02 June 2025 17:32:18 +0000 (0:00:00.249) 0:00:25.614 ***********
2025-06-02 17:32:19.188587 | orchestrator | ok:
[testbed-node-4] 2025-06-02 17:32:19.189601 | orchestrator | 2025-06-02 17:32:19.190578 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:19.191507 | orchestrator | Monday 02 June 2025 17:32:19 +0000 (0:00:00.262) 0:00:25.876 *********** 2025-06-02 17:32:19.672931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-02 17:32:19.673710 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-02 17:32:19.675343 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-02 17:32:19.675992 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-02 17:32:19.676971 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-02 17:32:19.677706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-02 17:32:19.678285 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-02 17:32:19.678833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-02 17:32:19.680008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-02 17:32:19.680348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-02 17:32:19.680973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-02 17:32:19.681469 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-02 17:32:19.682255 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-02 17:32:19.682990 | orchestrator | 2025-06-02 
17:32:19.683859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:19.684155 | orchestrator | Monday 02 June 2025 17:32:19 +0000 (0:00:00.484) 0:00:26.361 *********** 2025-06-02 17:32:19.866272 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:19.866804 | orchestrator | 2025-06-02 17:32:19.867796 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:19.868105 | orchestrator | Monday 02 June 2025 17:32:19 +0000 (0:00:00.194) 0:00:26.555 *********** 2025-06-02 17:32:20.068643 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:20.068851 | orchestrator | 2025-06-02 17:32:20.069674 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:20.070810 | orchestrator | Monday 02 June 2025 17:32:20 +0000 (0:00:00.201) 0:00:26.756 *********** 2025-06-02 17:32:20.257214 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:20.257706 | orchestrator | 2025-06-02 17:32:20.258579 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:20.259124 | orchestrator | Monday 02 June 2025 17:32:20 +0000 (0:00:00.188) 0:00:26.945 *********** 2025-06-02 17:32:20.930222 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:20.930652 | orchestrator | 2025-06-02 17:32:20.931253 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:20.931806 | orchestrator | Monday 02 June 2025 17:32:20 +0000 (0:00:00.671) 0:00:27.617 *********** 2025-06-02 17:32:21.139033 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:21.140854 | orchestrator | 2025-06-02 17:32:21.141785 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:21.142659 | orchestrator | Monday 02 June 2025 17:32:21 +0000 (0:00:00.209) 
0:00:27.826 *********** 2025-06-02 17:32:21.348555 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:21.349761 | orchestrator | 2025-06-02 17:32:21.351424 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:21.351578 | orchestrator | Monday 02 June 2025 17:32:21 +0000 (0:00:00.209) 0:00:28.036 *********** 2025-06-02 17:32:21.553866 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:21.554480 | orchestrator | 2025-06-02 17:32:21.555573 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:21.556386 | orchestrator | Monday 02 June 2025 17:32:21 +0000 (0:00:00.206) 0:00:28.242 *********** 2025-06-02 17:32:21.766404 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:21.767399 | orchestrator | 2025-06-02 17:32:21.768065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:21.768607 | orchestrator | Monday 02 June 2025 17:32:21 +0000 (0:00:00.211) 0:00:28.454 *********** 2025-06-02 17:32:22.189357 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84) 2025-06-02 17:32:22.189894 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84) 2025-06-02 17:32:22.190970 | orchestrator | 2025-06-02 17:32:22.191477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:22.192251 | orchestrator | Monday 02 June 2025 17:32:22 +0000 (0:00:00.423) 0:00:28.878 *********** 2025-06-02 17:32:22.610335 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3) 2025-06-02 17:32:22.611328 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3) 2025-06-02 17:32:22.612097 | orchestrator | 2025-06-02 17:32:22.613045 
| orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:22.613915 | orchestrator | Monday 02 June 2025 17:32:22 +0000 (0:00:00.419) 0:00:29.297 *********** 2025-06-02 17:32:23.067934 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cab884bf-6138-4574-8f5c-e044606bea62) 2025-06-02 17:32:23.068372 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cab884bf-6138-4574-8f5c-e044606bea62) 2025-06-02 17:32:23.069288 | orchestrator | 2025-06-02 17:32:23.069753 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:23.071652 | orchestrator | Monday 02 June 2025 17:32:23 +0000 (0:00:00.458) 0:00:29.756 *********** 2025-06-02 17:32:23.496026 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_075a40bb-072b-46c1-930e-3c0277237be4) 2025-06-02 17:32:23.497273 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_075a40bb-072b-46c1-930e-3c0277237be4) 2025-06-02 17:32:23.497922 | orchestrator | 2025-06-02 17:32:23.499095 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-02 17:32:23.499796 | orchestrator | Monday 02 June 2025 17:32:23 +0000 (0:00:00.428) 0:00:30.184 *********** 2025-06-02 17:32:23.832498 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-02 17:32:23.832970 | orchestrator | 2025-06-02 17:32:23.834919 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:23.835290 | orchestrator | Monday 02 June 2025 17:32:23 +0000 (0:00:00.335) 0:00:30.519 *********** 2025-06-02 17:32:24.484593 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-02 17:32:24.486139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-02 
17:32:24.487322 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-02 17:32:24.488394 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-02 17:32:24.489052 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-02 17:32:24.489815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-02 17:32:24.490558 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-02 17:32:24.491465 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-02 17:32:24.492064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-02 17:32:24.492534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-02 17:32:24.493174 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-02 17:32:24.493642 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-02 17:32:24.494225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-02 17:32:24.494682 | orchestrator | 2025-06-02 17:32:24.495251 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:24.495604 | orchestrator | Monday 02 June 2025 17:32:24 +0000 (0:00:00.653) 0:00:31.173 *********** 2025-06-02 17:32:24.709698 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:24.711056 | orchestrator | 2025-06-02 17:32:24.711094 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:24.712719 | orchestrator | Monday 02 
June 2025 17:32:24 +0000 (0:00:00.223) 0:00:31.396 *********** 2025-06-02 17:32:24.908950 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:24.910127 | orchestrator | 2025-06-02 17:32:24.910632 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:24.911353 | orchestrator | Monday 02 June 2025 17:32:24 +0000 (0:00:00.200) 0:00:31.597 *********** 2025-06-02 17:32:25.112224 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:25.113596 | orchestrator | 2025-06-02 17:32:25.114443 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:25.115645 | orchestrator | Monday 02 June 2025 17:32:25 +0000 (0:00:00.203) 0:00:31.800 *********** 2025-06-02 17:32:25.322080 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:25.322803 | orchestrator | 2025-06-02 17:32:25.324126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:25.324845 | orchestrator | Monday 02 June 2025 17:32:25 +0000 (0:00:00.210) 0:00:32.010 *********** 2025-06-02 17:32:25.526695 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:25.526902 | orchestrator | 2025-06-02 17:32:25.528115 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:25.528605 | orchestrator | Monday 02 June 2025 17:32:25 +0000 (0:00:00.204) 0:00:32.215 *********** 2025-06-02 17:32:25.739003 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:25.739401 | orchestrator | 2025-06-02 17:32:25.740722 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:25.741092 | orchestrator | Monday 02 June 2025 17:32:25 +0000 (0:00:00.211) 0:00:32.426 *********** 2025-06-02 17:32:25.940496 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:25.940847 | orchestrator | 2025-06-02 17:32:25.942096 | 
orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:25.943847 | orchestrator | Monday 02 June 2025 17:32:25 +0000 (0:00:00.201) 0:00:32.628 *********** 2025-06-02 17:32:26.202605 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:26.202839 | orchestrator | 2025-06-02 17:32:26.204267 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:26.204773 | orchestrator | Monday 02 June 2025 17:32:26 +0000 (0:00:00.261) 0:00:32.890 *********** 2025-06-02 17:32:27.094358 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-02 17:32:27.095636 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-02 17:32:27.096926 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-02 17:32:27.098321 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-02 17:32:27.099190 | orchestrator | 2025-06-02 17:32:27.099862 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:27.101194 | orchestrator | Monday 02 June 2025 17:32:27 +0000 (0:00:00.890) 0:00:33.781 *********** 2025-06-02 17:32:27.302234 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:27.302918 | orchestrator | 2025-06-02 17:32:27.305672 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:27.305704 | orchestrator | Monday 02 June 2025 17:32:27 +0000 (0:00:00.208) 0:00:33.989 *********** 2025-06-02 17:32:27.514685 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:27.514757 | orchestrator | 2025-06-02 17:32:27.516445 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:27.517324 | orchestrator | Monday 02 June 2025 17:32:27 +0000 (0:00:00.213) 0:00:34.202 *********** 2025-06-02 17:32:28.249186 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:28.252481 | 
orchestrator | 2025-06-02 17:32:28.252548 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-02 17:32:28.253470 | orchestrator | Monday 02 June 2025 17:32:28 +0000 (0:00:00.732) 0:00:34.935 *********** 2025-06-02 17:32:28.450100 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:28.451041 | orchestrator | 2025-06-02 17:32:28.451725 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-02 17:32:28.452693 | orchestrator | Monday 02 June 2025 17:32:28 +0000 (0:00:00.203) 0:00:35.139 *********** 2025-06-02 17:32:28.595429 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:28.596565 | orchestrator | 2025-06-02 17:32:28.599061 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-02 17:32:28.599089 | orchestrator | Monday 02 June 2025 17:32:28 +0000 (0:00:00.144) 0:00:35.283 *********** 2025-06-02 17:32:28.795954 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '428bf6aa-16e8-529e-a7f6-02fc5b7007d7'}}) 2025-06-02 17:32:28.797458 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '26d332e8-3a94-5f56-adf2-82846ed63b84'}}) 2025-06-02 17:32:28.798101 | orchestrator | 2025-06-02 17:32:28.799039 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-02 17:32:28.799656 | orchestrator | Monday 02 June 2025 17:32:28 +0000 (0:00:00.198) 0:00:35.482 *********** 2025-06-02 17:32:30.665063 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'}) 2025-06-02 17:32:30.665617 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'}) 2025-06-02 17:32:30.667005 | 
orchestrator | 2025-06-02 17:32:30.667016 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-02 17:32:30.667041 | orchestrator | Monday 02 June 2025 17:32:30 +0000 (0:00:01.870) 0:00:37.352 *********** 2025-06-02 17:32:30.839675 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})  2025-06-02 17:32:30.841165 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})  2025-06-02 17:32:30.842927 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:30.844325 | orchestrator | 2025-06-02 17:32:30.845478 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-02 17:32:30.846431 | orchestrator | Monday 02 June 2025 17:32:30 +0000 (0:00:00.173) 0:00:37.526 *********** 2025-06-02 17:32:32.151174 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'}) 2025-06-02 17:32:32.151446 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'}) 2025-06-02 17:32:32.151485 | orchestrator | 2025-06-02 17:32:32.152446 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-02 17:32:32.152974 | orchestrator | Monday 02 June 2025 17:32:32 +0000 (0:00:01.312) 0:00:38.838 *********** 2025-06-02 17:32:32.306501 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})  2025-06-02 17:32:32.307227 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})  2025-06-02 17:32:32.308176 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:32.309186 | orchestrator | 2025-06-02 17:32:32.310883 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-02 17:32:32.311880 | orchestrator | Monday 02 June 2025 17:32:32 +0000 (0:00:00.156) 0:00:38.995 *********** 2025-06-02 17:32:32.464442 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:32.464751 | orchestrator | 2025-06-02 17:32:32.464777 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-02 17:32:32.469212 | orchestrator | Monday 02 June 2025 17:32:32 +0000 (0:00:00.156) 0:00:39.152 *********** 2025-06-02 17:32:32.627827 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})  2025-06-02 17:32:32.627926 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})  2025-06-02 17:32:32.629228 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:32.629254 | orchestrator | 2025-06-02 17:32:32.629739 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-02 17:32:32.630432 | orchestrator | Monday 02 June 2025 17:32:32 +0000 (0:00:00.161) 0:00:39.314 *********** 2025-06-02 17:32:32.779643 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:32.779977 | orchestrator | 2025-06-02 17:32:32.782112 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-02 17:32:32.783585 | orchestrator | Monday 02 June 2025 17:32:32 +0000 (0:00:00.154) 0:00:39.468 *********** 2025-06-02 17:32:32.949739 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})  2025-06-02 17:32:32.951005 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})  2025-06-02 17:32:32.952921 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:32.954128 | orchestrator | 2025-06-02 17:32:32.956312 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-02 17:32:32.956688 | orchestrator | Monday 02 June 2025 17:32:32 +0000 (0:00:00.167) 0:00:39.636 *********** 2025-06-02 17:32:33.340650 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:33.341612 | orchestrator | 2025-06-02 17:32:33.343307 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 17:32:33.343348 | orchestrator | Monday 02 June 2025 17:32:33 +0000 (0:00:00.392) 0:00:40.028 *********** 2025-06-02 17:32:33.500478 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})  2025-06-02 17:32:33.502165 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})  2025-06-02 17:32:33.504130 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:33.504192 | orchestrator | 2025-06-02 17:32:33.504724 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 17:32:33.506081 | orchestrator | Monday 02 June 2025 17:32:33 +0000 (0:00:00.160) 0:00:40.188 *********** 2025-06-02 17:32:33.660330 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:32:33.660475 | orchestrator | 2025-06-02 17:32:33.663492 | orchestrator | TASK [Count OSDs put on ceph_db_devices 
defined in lvm_volumes] **************** 2025-06-02 17:32:33.664616 | orchestrator | Monday 02 June 2025 17:32:33 +0000 (0:00:00.158) 0:00:40.347 *********** 2025-06-02 17:32:33.817201 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})  2025-06-02 17:32:33.818201 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})  2025-06-02 17:32:33.820013 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:33.820769 | orchestrator | 2025-06-02 17:32:33.821910 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 17:32:33.822511 | orchestrator | Monday 02 June 2025 17:32:33 +0000 (0:00:00.158) 0:00:40.505 *********** 2025-06-02 17:32:33.984774 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})  2025-06-02 17:32:33.986343 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})  2025-06-02 17:32:33.987562 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:33.988215 | orchestrator | 2025-06-02 17:32:33.989419 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 17:32:33.990655 | orchestrator | Monday 02 June 2025 17:32:33 +0000 (0:00:00.166) 0:00:40.671 *********** 2025-06-02 17:32:34.150982 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})  2025-06-02 17:32:34.152879 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 
'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})  2025-06-02 17:32:34.155028 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:34.156649 | orchestrator | 2025-06-02 17:32:34.157627 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 17:32:34.158177 | orchestrator | Monday 02 June 2025 17:32:34 +0000 (0:00:00.167) 0:00:40.839 *********** 2025-06-02 17:32:34.293238 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:34.293446 | orchestrator | 2025-06-02 17:32:34.294413 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 17:32:34.295242 | orchestrator | Monday 02 June 2025 17:32:34 +0000 (0:00:00.141) 0:00:40.981 *********** 2025-06-02 17:32:34.449607 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:34.449715 | orchestrator | 2025-06-02 17:32:34.449827 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 17:32:34.450823 | orchestrator | Monday 02 June 2025 17:32:34 +0000 (0:00:00.157) 0:00:41.138 *********** 2025-06-02 17:32:34.598676 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:32:34.599361 | orchestrator | 2025-06-02 17:32:34.601439 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 17:32:34.602103 | orchestrator | Monday 02 June 2025 17:32:34 +0000 (0:00:00.149) 0:00:41.287 *********** 2025-06-02 17:32:34.743410 | orchestrator | ok: [testbed-node-4] => { 2025-06-02 17:32:34.744247 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 17:32:34.745083 | orchestrator | } 2025-06-02 17:32:34.747302 | orchestrator | 2025-06-02 17:32:34.747362 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 17:32:34.747377 | orchestrator | Monday 02 June 2025 17:32:34 +0000 (0:00:00.143) 0:00:41.431 *********** 2025-06-02 17:32:34.896259 | 
orchestrator | ok: [testbed-node-4] => {
2025-06-02 17:32:34.897182 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-06-02 17:32:34.898197 | orchestrator | }
2025-06-02 17:32:34.899751 | orchestrator |
2025-06-02 17:32:34.900792 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-06-02 17:32:34.901931 | orchestrator | Monday 02 June 2025 17:32:34 +0000 (0:00:00.152) 0:00:41.583 ***********
2025-06-02 17:32:35.038777 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 17:32:35.039828 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-06-02 17:32:35.039863 | orchestrator | }
2025-06-02 17:32:35.040765 | orchestrator |
2025-06-02 17:32:35.041484 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-06-02 17:32:35.042204 | orchestrator | Monday 02 June 2025 17:32:35 +0000 (0:00:00.144) 0:00:41.728 ***********
2025-06-02 17:32:35.767915 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:32:35.768743 | orchestrator |
2025-06-02 17:32:35.769481 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-06-02 17:32:35.770148 | orchestrator | Monday 02 June 2025 17:32:35 +0000 (0:00:00.728) 0:00:42.457 ***********
2025-06-02 17:32:36.294650 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:32:36.294734 | orchestrator |
2025-06-02 17:32:36.295292 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-06-02 17:32:36.295699 | orchestrator | Monday 02 June 2025 17:32:36 +0000 (0:00:00.525) 0:00:42.982 ***********
2025-06-02 17:32:36.797934 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:32:36.798394 | orchestrator |
2025-06-02 17:32:36.799324 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-06-02 17:32:36.800326 | orchestrator | Monday 02 June 2025 17:32:36 +0000 (0:00:00.502) 0:00:43.485 ***********
2025-06-02 17:32:36.986159 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:32:36.986320 | orchestrator |
2025-06-02 17:32:36.987101 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-06-02 17:32:36.987595 | orchestrator | Monday 02 June 2025 17:32:36 +0000 (0:00:00.188) 0:00:43.673 ***********
2025-06-02 17:32:37.116674 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:37.118640 | orchestrator |
2025-06-02 17:32:37.118928 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-06-02 17:32:37.120313 | orchestrator | Monday 02 June 2025 17:32:37 +0000 (0:00:00.132) 0:00:43.806 ***********
2025-06-02 17:32:37.237793 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:37.238269 | orchestrator |
2025-06-02 17:32:37.241268 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-06-02 17:32:37.241629 | orchestrator | Monday 02 June 2025 17:32:37 +0000 (0:00:00.119) 0:00:43.926 ***********
2025-06-02 17:32:37.382278 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 17:32:37.382989 | orchestrator |  "vgs_report": {
2025-06-02 17:32:37.384350 | orchestrator |  "vg": []
2025-06-02 17:32:37.386227 | orchestrator |  }
2025-06-02 17:32:37.386254 | orchestrator | }
2025-06-02 17:32:37.387091 | orchestrator |
2025-06-02 17:32:37.387580 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-06-02 17:32:37.388076 | orchestrator | Monday 02 June 2025 17:32:37 +0000 (0:00:00.144) 0:00:44.070 ***********
2025-06-02 17:32:37.525837 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:37.526080 | orchestrator |
2025-06-02 17:32:37.527064 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-06-02 17:32:37.527894 | orchestrator | Monday 02 June 2025 17:32:37 +0000 (0:00:00.143) 0:00:44.214 ***********
2025-06-02 17:32:37.684022 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:37.685163 | orchestrator |
2025-06-02 17:32:37.685672 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-06-02 17:32:37.686620 | orchestrator | Monday 02 June 2025 17:32:37 +0000 (0:00:00.157) 0:00:44.372 ***********
2025-06-02 17:32:37.827978 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:37.831930 | orchestrator |
2025-06-02 17:32:37.832324 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-06-02 17:32:37.832503 | orchestrator | Monday 02 June 2025 17:32:37 +0000 (0:00:00.143) 0:00:44.515 ***********
2025-06-02 17:32:37.981991 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:37.982813 | orchestrator |
2025-06-02 17:32:37.983602 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-06-02 17:32:37.984326 | orchestrator | Monday 02 June 2025 17:32:37 +0000 (0:00:00.151) 0:00:44.667 ***********
2025-06-02 17:32:38.116045 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:38.117397 | orchestrator |
2025-06-02 17:32:38.117427 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-06-02 17:32:38.117441 | orchestrator | Monday 02 June 2025 17:32:38 +0000 (0:00:00.138) 0:00:44.805 ***********
2025-06-02 17:32:38.482105 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:38.482276 | orchestrator |
2025-06-02 17:32:38.483059 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-06-02 17:32:38.483150 | orchestrator | Monday 02 June 2025 17:32:38 +0000 (0:00:00.364) 0:00:45.170 ***********
2025-06-02 17:32:38.631705 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:38.631857 | orchestrator |
2025-06-02 17:32:38.632314 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-06-02 17:32:38.632604 | orchestrator | Monday 02 June 2025 17:32:38 +0000 (0:00:00.149) 0:00:45.320 ***********
2025-06-02 17:32:38.778613 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:38.779415 | orchestrator |
2025-06-02 17:32:38.780298 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-06-02 17:32:38.780329 | orchestrator | Monday 02 June 2025 17:32:38 +0000 (0:00:00.146) 0:00:45.467 ***********
2025-06-02 17:32:38.935509 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:38.937744 | orchestrator |
2025-06-02 17:32:38.937778 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-06-02 17:32:38.940497 | orchestrator | Monday 02 June 2025 17:32:38 +0000 (0:00:00.156) 0:00:45.623 ***********
2025-06-02 17:32:39.086095 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:39.087259 | orchestrator |
2025-06-02 17:32:39.088169 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-02 17:32:39.088758 | orchestrator | Monday 02 June 2025 17:32:39 +0000 (0:00:00.150) 0:00:45.774 ***********
2025-06-02 17:32:39.277562 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:39.278984 | orchestrator |
2025-06-02 17:32:39.279015 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-02 17:32:39.279697 | orchestrator | Monday 02 June 2025 17:32:39 +0000 (0:00:00.189) 0:00:45.964 ***********
2025-06-02 17:32:39.430982 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:39.431919 | orchestrator |
2025-06-02 17:32:39.432975 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-02 17:32:39.433980 | orchestrator | Monday 02 June 2025 17:32:39 +0000 (0:00:00.154) 0:00:46.118 ***********
2025-06-02 17:32:39.574071 | orchestrator | skipping: [testbed-node-4]
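The skipped "Calculate size needed …" and "Fail if …" guards above reduce to two comparisons: the requested DB/WAL LVs must fit into the free space of their VG, and each DB LV must be at least 30 GiB. A minimal sketch of that check, with hypothetical helper and parameter names (the playbook's actual task logic is not shown in this log):

```python
GIB = 1024 ** 3  # size unit used by the "< 30 GiB" guard tasks above

def check_db_lv_sizes(db_lv_size_bytes, num_lvs, vg_free_bytes,
                      min_size_bytes=30 * GIB):
    """Sketch of the two guard conditions: total requested DB LV space
    must fit into the VG's free bytes, and a single DB LV must not be
    smaller than 30 GiB."""
    if db_lv_size_bytes * num_lvs > vg_free_bytes:
        raise ValueError("size of DB LVs > available")
    if db_lv_size_bytes < min_size_bytes:
        raise ValueError("DB LV size < 30 GiB")
    return True
```

On testbed-node-4 both checks are skipped because no ceph_db_devices / ceph_wal_devices are configured (the VG reports above are empty).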
2025-06-02 17:32:39.574157 | orchestrator |
2025-06-02 17:32:39.575724 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-02 17:32:39.576507 | orchestrator | Monday 02 June 2025 17:32:39 +0000 (0:00:00.143) 0:00:46.262 ***********
2025-06-02 17:32:39.718258 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:39.718439 | orchestrator |
2025-06-02 17:32:39.718974 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-02 17:32:39.719835 | orchestrator | Monday 02 June 2025 17:32:39 +0000 (0:00:00.144) 0:00:46.406 ***********
2025-06-02 17:32:39.899583 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})
2025-06-02 17:32:39.899679 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})
2025-06-02 17:32:39.900488 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:39.901154 | orchestrator |
2025-06-02 17:32:39.901875 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-02 17:32:39.902790 | orchestrator | Monday 02 June 2025 17:32:39 +0000 (0:00:00.180) 0:00:46.587 ***********
2025-06-02 17:32:40.071737 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})
2025-06-02 17:32:40.071838 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})
2025-06-02 17:32:40.072248 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:40.072432 | orchestrator |
2025-06-02 17:32:40.073504 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-02 17:32:40.073615 | orchestrator | Monday 02 June 2025 17:32:40 +0000 (0:00:00.171) 0:00:46.759 ***********
2025-06-02 17:32:40.245184 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})
2025-06-02 17:32:40.245896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})
2025-06-02 17:32:40.246953 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:40.248268 | orchestrator |
2025-06-02 17:32:40.249122 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-02 17:32:40.249417 | orchestrator | Monday 02 June 2025 17:32:40 +0000 (0:00:00.174) 0:00:46.933 ***********
2025-06-02 17:32:40.637760 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})
2025-06-02 17:32:40.637922 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})
2025-06-02 17:32:40.638509 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:40.639708 | orchestrator |
2025-06-02 17:32:40.640705 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-02 17:32:40.641685 | orchestrator | Monday 02 June 2025 17:32:40 +0000 (0:00:00.391) 0:00:47.325 ***********
2025-06-02 17:32:40.792814 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})
2025-06-02 17:32:40.793657 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})
2025-06-02 17:32:40.794481 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:40.795471 | orchestrator |
2025-06-02 17:32:40.796265 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-02 17:32:40.796768 | orchestrator | Monday 02 June 2025 17:32:40 +0000 (0:00:00.156) 0:00:47.481 ***********
2025-06-02 17:32:40.953415 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})
2025-06-02 17:32:40.953596 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})
2025-06-02 17:32:40.953690 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:40.954232 | orchestrator |
2025-06-02 17:32:40.954890 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-02 17:32:40.956134 | orchestrator | Monday 02 June 2025 17:32:40 +0000 (0:00:00.160) 0:00:47.642 ***********
2025-06-02 17:32:41.123418 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})
2025-06-02 17:32:41.124887 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})
2025-06-02 17:32:41.125697 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:41.126478 | orchestrator |
2025-06-02 17:32:41.127597 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-02 17:32:41.128280 | orchestrator | Monday 02 June 2025 17:32:41 +0000 (0:00:00.168) 0:00:47.810 ***********
2025-06-02 17:32:41.291319 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})
2025-06-02 17:32:41.291778 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})
2025-06-02 17:32:41.293043 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:41.293733 | orchestrator |
2025-06-02 17:32:41.294554 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-02 17:32:41.294859 | orchestrator | Monday 02 June 2025 17:32:41 +0000 (0:00:00.169) 0:00:47.979 ***********
2025-06-02 17:32:41.808658 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:32:41.810766 | orchestrator |
2025-06-02 17:32:41.811272 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-02 17:32:41.811949 | orchestrator | Monday 02 June 2025 17:32:41 +0000 (0:00:00.517) 0:00:48.497 ***********
2025-06-02 17:32:42.325253 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:32:42.325835 | orchestrator |
2025-06-02 17:32:42.325991 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-02 17:32:42.326563 | orchestrator | Monday 02 June 2025 17:32:42 +0000 (0:00:00.513) 0:00:49.010 ***********
2025-06-02 17:32:42.462901 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:32:42.463069 | orchestrator |
2025-06-02 17:32:42.465047 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-02 17:32:42.466949 | orchestrator | Monday 02 June 2025 17:32:42 +0000 (0:00:00.140) 0:00:49.151 ***********
2025-06-02 17:32:42.660550 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'vg_name': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})
2025-06-02 17:32:42.660722 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'vg_name': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})
2025-06-02 17:32:42.661380 | orchestrator |
2025-06-02 17:32:42.662129 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-02 17:32:42.662922 | orchestrator | Monday 02 June 2025 17:32:42 +0000 (0:00:00.198) 0:00:49.349 ***********
2025-06-02 17:32:42.813694 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})
2025-06-02 17:32:42.813912 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})
2025-06-02 17:32:42.814354 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:42.814977 | orchestrator |
2025-06-02 17:32:42.816649 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-02 17:32:42.818009 | orchestrator | Monday 02 June 2025 17:32:42 +0000 (0:00:00.153) 0:00:49.503 ***********
2025-06-02 17:32:42.980911 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})
2025-06-02 17:32:42.981903 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})
2025-06-02 17:32:42.982699 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:42.983725 | orchestrator |
2025-06-02 17:32:42.984533 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 17:32:42.985387 | orchestrator | Monday 02 June 2025 17:32:42 +0000 (0:00:00.166) 0:00:49.669 ***********
2025-06-02 17:32:43.147751 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})
2025-06-02 17:32:43.148706 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})
2025-06-02 17:32:43.149235 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:32:43.149414 | orchestrator |
2025-06-02 17:32:43.150078 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 17:32:43.150385 | orchestrator | Monday 02 June 2025 17:32:43 +0000 (0:00:00.164) 0:00:49.833 ***********
2025-06-02 17:32:43.584891 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 17:32:43.586382 | orchestrator |  "lvm_report": {
2025-06-02 17:32:43.587569 | orchestrator |  "lv": [
2025-06-02 17:32:43.588207 | orchestrator |  {
2025-06-02 17:32:43.590191 | orchestrator |  "lv_name": "osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84",
2025-06-02 17:32:43.591884 | orchestrator |  "vg_name": "ceph-26d332e8-3a94-5f56-adf2-82846ed63b84"
2025-06-02 17:32:43.593072 | orchestrator |  },
2025-06-02 17:32:43.594161 | orchestrator |  {
2025-06-02 17:32:43.595165 | orchestrator |  "lv_name": "osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7",
2025-06-02 17:32:43.596198 | orchestrator |  "vg_name": "ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7"
2025-06-02 17:32:43.597243 | orchestrator |  }
2025-06-02 17:32:43.597807 | orchestrator |  ],
2025-06-02 17:32:43.598377 | orchestrator |  "pv": [
2025-06-02 17:32:43.599389 | orchestrator |  {
2025-06-02 17:32:43.599859 | orchestrator |  "pv_name": "/dev/sdb",
2025-06-02 17:32:43.600203 | orchestrator |  "vg_name": "ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7"
2025-06-02 17:32:43.600753 | orchestrator |  },
2025-06-02 17:32:43.601200 | orchestrator |  {
2025-06-02 17:32:43.601642 | orchestrator |  "pv_name": "/dev/sdc",
2025-06-02 17:32:43.602120 | orchestrator |  "vg_name": "ceph-26d332e8-3a94-5f56-adf2-82846ed63b84"
2025-06-02 17:32:43.602465 | orchestrator |  }
2025-06-02 17:32:43.602894 | orchestrator |  ]
2025-06-02 17:32:43.603275 | orchestrator |  }
2025-06-02 17:32:43.603739 | orchestrator | }
2025-06-02 17:32:43.604422 | orchestrator |
2025-06-02 17:32:43.604582 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-06-02 17:32:43.604903 | orchestrator |
2025-06-02 17:32:43.605227 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-06-02 17:32:43.605633 | orchestrator | Monday 02 June 2025 17:32:43 +0000 (0:00:00.440) 0:00:50.274 ***********
2025-06-02 17:32:43.808406 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-06-02 17:32:43.808652 | orchestrator |
2025-06-02 17:32:43.809081 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-06-02 17:32:43.809434 | orchestrator | Monday 02 June 2025 17:32:43 +0000 (0:00:00.223) 0:00:50.498 ***********
2025-06-02 17:32:44.020339 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:32:44.022090 | orchestrator |
2025-06-02 17:32:44.022254 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:44.022748 | orchestrator | Monday 02 June 2025 17:32:44 +0000 (0:00:00.211) 0:00:50.709 ***********
2025-06-02 17:32:44.402385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-06-02 17:32:44.403107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-06-02 17:32:44.404128 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-06-02 17:32:44.405016 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-06-02 17:32:44.405682 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-06-02 17:32:44.406375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-06-02 17:32:44.407058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-06-02 17:32:44.407446 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-06-02 17:32:44.408286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-06-02 17:32:44.408769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-06-02 17:32:44.409142 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-06-02 17:32:44.409780 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-06-02 17:32:44.410231 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-06-02 17:32:44.410583 | orchestrator |
2025-06-02 17:32:44.410979 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:44.411407 | orchestrator | Monday 02 June 2025 17:32:44 +0000 (0:00:00.381) 0:00:51.091 ***********
2025-06-02 17:32:44.603928 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:44.604824 | orchestrator |
2025-06-02 17:32:44.605383 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:44.606370 | orchestrator | Monday 02 June 2025 17:32:44 +0000 (0:00:00.202) 0:00:51.293 ***********
2025-06-02 17:32:44.782628 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:44.784277 | orchestrator |
2025-06-02 17:32:44.784906 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:44.786111 | orchestrator | Monday 02 June 2025 17:32:44 +0000 (0:00:00.178) 0:00:51.471 ***********
2025-06-02 17:32:44.986319 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:44.987005 | orchestrator |
2025-06-02 17:32:44.987690 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:44.988339 | orchestrator | Monday 02 June 2025 17:32:44 +0000 (0:00:00.203) 0:00:51.675 ***********
2025-06-02 17:32:45.160409 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:45.160687 | orchestrator |
2025-06-02 17:32:45.161542 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:45.162147 | orchestrator | Monday 02 June 2025 17:32:45 +0000 (0:00:00.172) 0:00:51.848 ***********
2025-06-02 17:32:45.362412 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:45.364695 | orchestrator |
2025-06-02 17:32:45.364741 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:45.366746 | orchestrator | Monday 02 June 2025 17:32:45 +0000 (0:00:00.203) 0:00:52.051 ***********
2025-06-02 17:32:45.853775 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:45.854703 | orchestrator |
2025-06-02 17:32:45.855352 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:45.855381 | orchestrator | Monday 02 June 2025 17:32:45 +0000 (0:00:00.491) 0:00:52.542 ***********
2025-06-02 17:32:46.055561 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:46.055669 | orchestrator |
2025-06-02 17:32:46.055891 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:46.057061 | orchestrator | Monday 02 June 2025 17:32:46 +0000 (0:00:00.202) 0:00:52.745 ***********
2025-06-02 17:32:46.251414 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:46.251948 | orchestrator |
2025-06-02 17:32:46.252590 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:46.253586 | orchestrator | Monday 02 June 2025 17:32:46 +0000 (0:00:00.194) 0:00:52.939 ***********
2025-06-02 17:32:46.634846 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6)
2025-06-02 17:32:46.636686 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6)
2025-06-02 17:32:46.637126 | orchestrator |
2025-06-02 17:32:46.638146 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:46.638889 | orchestrator | Monday 02 June 2025 17:32:46 +0000 (0:00:00.384) 0:00:53.323 ***********
2025-06-02 17:32:47.019187 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4a588e14-c726-4684-ac8a-ec1bcbcaf53d)
2025-06-02 17:32:47.021701 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4a588e14-c726-4684-ac8a-ec1bcbcaf53d)
2025-06-02 17:32:47.021750 | orchestrator |
2025-06-02 17:32:47.022492 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:47.023480 | orchestrator | Monday 02 June 2025 17:32:47 +0000 (0:00:00.384) 0:00:53.708 ***********
2025-06-02 17:32:47.396863 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_42dd6fc7-77c1-48dd-afcf-d774f79f6bbd)
2025-06-02 17:32:47.396964 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_42dd6fc7-77c1-48dd-afcf-d774f79f6bbd)
2025-06-02 17:32:47.396978 | orchestrator |
2025-06-02 17:32:47.397054 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:47.397316 | orchestrator | Monday 02 June 2025 17:32:47 +0000 (0:00:00.376) 0:00:54.085 ***********
2025-06-02 17:32:47.817775 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_53941cc3-a8ff-45b3-9c82-286f81867ab6)
2025-06-02 17:32:47.818936 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_53941cc3-a8ff-45b3-9c82-286f81867ab6)
2025-06-02 17:32:47.819078 | orchestrator |
2025-06-02 17:32:47.819972 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-06-02 17:32:47.820439 | orchestrator | Monday 02 June 2025 17:32:47 +0000 (0:00:00.421) 0:00:54.506 ***********
2025-06-02 17:32:48.137315 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-06-02 17:32:48.139058 | orchestrator |
2025-06-02 17:32:48.139597 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:48.140490 | orchestrator | Monday 02 June 2025 17:32:48 +0000 (0:00:00.317) 0:00:54.823 ***********
2025-06-02 17:32:48.523451 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-06-02 17:32:48.524320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-06-02 17:32:48.524507 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-06-02 17:32:48.526407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-06-02 17:32:48.527235 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-06-02 17:32:48.528556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-06-02 17:32:48.530139 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-06-02 17:32:48.530649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-06-02 17:32:48.531356 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-06-02 17:32:48.532167 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-06-02 17:32:48.533147 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-06-02 17:32:48.533592 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-06-02 17:32:48.534320 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-06-02 17:32:48.535305 | orchestrator |
2025-06-02 17:32:48.536012 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:48.536701 | orchestrator | Monday 02 June 2025 17:32:48 +0000 (0:00:00.388) 0:00:55.212 ***********
2025-06-02 17:32:48.764775 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:48.764873 | orchestrator |
2025-06-02 17:32:48.765394 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:48.766084 | orchestrator | Monday 02 June 2025 17:32:48 +0000 (0:00:00.240) 0:00:55.453 ***********
2025-06-02 17:32:48.998420 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:49.000178 | orchestrator |
2025-06-02 17:32:49.000247 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:49.000390 | orchestrator | Monday 02 June 2025 17:32:48 +0000 (0:00:00.233) 0:00:55.687 ***********
2025-06-02 17:32:49.591256 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:49.592791 | orchestrator |
2025-06-02 17:32:49.594077 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:49.595241 | orchestrator | Monday 02 June 2025 17:32:49 +0000 (0:00:00.592) 0:00:56.279 ***********
2025-06-02 17:32:49.831095 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:49.831611 | orchestrator |
2025-06-02 17:32:49.832737 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:49.833924 | orchestrator | Monday 02 June 2025 17:32:49 +0000 (0:00:00.241) 0:00:56.520 ***********
2025-06-02 17:32:50.068029 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:50.068495 | orchestrator |
2025-06-02 17:32:50.069608 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:50.070608 | orchestrator | Monday 02 June 2025 17:32:50 +0000 (0:00:00.236) 0:00:56.757 ***********
2025-06-02 17:32:50.257992 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:50.258268 | orchestrator |
2025-06-02 17:32:50.258985 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:50.259866 | orchestrator | Monday 02 June 2025 17:32:50 +0000 (0:00:00.188) 0:00:56.946 ***********
2025-06-02 17:32:50.450795 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:50.450930 | orchestrator |
2025-06-02 17:32:50.451012 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:50.451406 | orchestrator | Monday 02 June 2025 17:32:50 +0000 (0:00:00.193) 0:00:57.140 ***********
2025-06-02 17:32:50.633924 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:50.634202 | orchestrator |
2025-06-02 17:32:50.635216 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:50.636938 | orchestrator | Monday 02 June 2025 17:32:50 +0000 (0:00:00.182) 0:00:57.322 ***********
2025-06-02 17:32:51.249464 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-06-02 17:32:51.249935 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-06-02 17:32:51.251400 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-06-02 17:32:51.252342 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-06-02 17:32:51.253276 | orchestrator |
2025-06-02 17:32:51.253835 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:51.254829 | orchestrator | Monday 02 June 2025 17:32:51 +0000 (0:00:00.615) 0:00:57.937 ***********
2025-06-02 17:32:51.433437 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:51.433737 | orchestrator |
2025-06-02 17:32:51.435260 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:51.435378 | orchestrator | Monday 02 June 2025 17:32:51 +0000 (0:00:00.185) 0:00:58.122 ***********
2025-06-02 17:32:51.630737 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:51.631428 | orchestrator |
2025-06-02 17:32:51.632967 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:51.635538 | orchestrator | Monday 02 June 2025 17:32:51 +0000 (0:00:00.197) 0:00:58.319 ***********
2025-06-02 17:32:51.812805 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:51.813294 | orchestrator |
2025-06-02 17:32:51.814273 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-06-02 17:32:51.815671 | orchestrator | Monday 02 June 2025 17:32:51 +0000 (0:00:00.182) 0:00:58.502 ***********
2025-06-02 17:32:52.042000 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:52.042428 | orchestrator |
2025-06-02 17:32:52.042943 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-06-02 17:32:52.044217 | orchestrator | Monday 02 June 2025 17:32:52 +0000 (0:00:00.227) 0:00:58.730 ***********
2025-06-02 17:32:52.329166 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:52.329410 | orchestrator |
2025-06-02 17:32:52.330163 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-06-02 17:32:52.331627 | orchestrator | Monday 02 June 2025 17:32:52 +0000 (0:00:00.288) 0:00:59.018 ***********
2025-06-02 17:32:52.509436 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7944d10b-922c-5cd9-bd54-91ce5496d9bc'}})
2025-06-02 17:32:52.509685 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '455b12e9-4014-57cf-aec2-de5d805a7d14'}})
2025-06-02 17:32:52.510303 | orchestrator |
2025-06-02 17:32:52.511122 | orchestrator | TASK [Create block VGs] ********************************************************
2025-06-02 17:32:52.512135 | orchestrator | Monday 02 June 2025 17:32:52 +0000 (0:00:00.180) 0:00:59.198 ***********
2025-06-02 17:32:54.329209 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:32:54.329322 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:32:54.329920 | orchestrator |
2025-06-02 17:32:54.330717 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-06-02 17:32:54.331335 | orchestrator | Monday 02 June 2025 17:32:54 +0000 (0:00:01.815) 0:01:01.014 ***********
2025-06-02 17:32:54.469958 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:32:54.470396 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:32:54.471104 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:54.471783 | orchestrator |
2025-06-02 17:32:54.472737 | orchestrator | TASK [Create block LVs] ********************************************************
2025-06-02 17:32:54.473298 | orchestrator | Monday 02 June 2025 17:32:54 +0000 (0:00:00.144) 0:01:01.159 ***********
2025-06-02 17:32:55.805125 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:32:55.805893 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:32:55.806823 | orchestrator |
2025-06-02 17:32:55.807781 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-06-02 17:32:55.808498 | orchestrator | Monday 02 June 2025 17:32:55 +0000 (0:00:01.332) 0:01:02.492 ***********
2025-06-02 17:32:55.967711 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:32:55.969005 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:32:55.969171 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:55.971574 | orchestrator |
2025-06-02 17:32:55.971953 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-06-02 17:32:55.972869 | orchestrator | Monday 02 June 2025 17:32:55 +0000 (0:00:00.145) 0:01:02.656 ***********
2025-06-02 17:32:56.113566 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:56.114597 | orchestrator |
2025-06-02 17:32:56.115456 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-06-02 17:32:56.116901 | orchestrator | Monday 02 June 2025 17:32:56 +0000 (0:00:00.145) 0:01:02.802 ***********
2025-06-02 17:32:56.282439 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:32:56.283279 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:32:56.283718 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:56.284451 | orchestrator |
2025-06-02 17:32:56.285455 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-06-02 17:32:56.286358 | orchestrator | Monday 02 June 2025 17:32:56 +0000 (0:00:00.168) 0:01:02.971 ***********
2025-06-02 17:32:56.404983 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:56.405156 | orchestrator |
2025-06-02 17:32:56.405958 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-06-02 17:32:56.406697 | orchestrator | Monday 02 June 2025 17:32:56 +0000 (0:00:00.119) 0:01:03.090 ***********
2025-06-02 17:32:56.621072 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:32:56.621198 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:32:56.621334 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:56.622137 | orchestrator |
2025-06-02 17:32:56.622857 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-06-02 17:32:56.623580 | orchestrator | Monday 02 June 2025 17:32:56 +0000 (0:00:00.219) 0:01:03.310 ***********
2025-06-02 17:32:56.767469 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:32:56.768577 | orchestrator |
2025-06-02 17:32:56.768859 | orchestrator | TASK
[Print 'Create DB+WAL VGs'] *********************************************** 2025-06-02 17:32:56.770112 | orchestrator | Monday 02 June 2025 17:32:56 +0000 (0:00:00.145) 0:01:03.456 *********** 2025-06-02 17:32:56.929401 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})  2025-06-02 17:32:56.932579 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})  2025-06-02 17:32:56.932648 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:56.932989 | orchestrator | 2025-06-02 17:32:56.933149 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-02 17:32:56.933763 | orchestrator | Monday 02 June 2025 17:32:56 +0000 (0:00:00.161) 0:01:03.617 *********** 2025-06-02 17:32:57.050500 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:32:57.050887 | orchestrator | 2025-06-02 17:32:57.051452 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-02 17:32:57.051928 | orchestrator | Monday 02 June 2025 17:32:57 +0000 (0:00:00.121) 0:01:03.739 *********** 2025-06-02 17:32:57.344914 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})  2025-06-02 17:32:57.345352 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})  2025-06-02 17:32:57.345670 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:57.347482 | orchestrator | 2025-06-02 17:32:57.349717 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-02 17:32:57.349966 | orchestrator | Monday 02 June 2025 
17:32:57 +0000 (0:00:00.294) 0:01:04.034 *********** 2025-06-02 17:32:57.486809 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})  2025-06-02 17:32:57.486911 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})  2025-06-02 17:32:57.486925 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:57.487043 | orchestrator | 2025-06-02 17:32:57.487062 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-02 17:32:57.487111 | orchestrator | Monday 02 June 2025 17:32:57 +0000 (0:00:00.140) 0:01:04.175 *********** 2025-06-02 17:32:57.639380 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})  2025-06-02 17:32:57.640189 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})  2025-06-02 17:32:57.641412 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:57.642218 | orchestrator | 2025-06-02 17:32:57.643031 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-02 17:32:57.643583 | orchestrator | Monday 02 June 2025 17:32:57 +0000 (0:00:00.153) 0:01:04.328 *********** 2025-06-02 17:32:57.763179 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:57.763275 | orchestrator | 2025-06-02 17:32:57.763288 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-02 17:32:57.763854 | orchestrator | Monday 02 June 2025 17:32:57 +0000 (0:00:00.123) 0:01:04.451 *********** 2025-06-02 17:32:57.892197 | orchestrator | skipping: [testbed-node-5] 2025-06-02 
17:32:57.894180 | orchestrator | 2025-06-02 17:32:57.894349 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-02 17:32:57.895499 | orchestrator | Monday 02 June 2025 17:32:57 +0000 (0:00:00.129) 0:01:04.580 *********** 2025-06-02 17:32:58.029086 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:32:58.029945 | orchestrator | 2025-06-02 17:32:58.030401 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-02 17:32:58.031501 | orchestrator | Monday 02 June 2025 17:32:58 +0000 (0:00:00.136) 0:01:04.717 *********** 2025-06-02 17:32:58.179147 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 17:32:58.181338 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-02 17:32:58.182793 | orchestrator | } 2025-06-02 17:32:58.183216 | orchestrator | 2025-06-02 17:32:58.184085 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-02 17:32:58.184557 | orchestrator | Monday 02 June 2025 17:32:58 +0000 (0:00:00.149) 0:01:04.866 *********** 2025-06-02 17:32:58.314068 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 17:32:58.314674 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-02 17:32:58.315725 | orchestrator | } 2025-06-02 17:32:58.317585 | orchestrator | 2025-06-02 17:32:58.318004 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-02 17:32:58.318911 | orchestrator | Monday 02 June 2025 17:32:58 +0000 (0:00:00.136) 0:01:05.003 *********** 2025-06-02 17:32:58.451390 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 17:32:58.453019 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-02 17:32:58.454002 | orchestrator | } 2025-06-02 17:32:58.455336 | orchestrator | 2025-06-02 17:32:58.455871 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-02 17:32:58.456776 | 
orchestrator | Monday 02 June 2025 17:32:58 +0000 (0:00:00.137) 0:01:05.140 *********** 2025-06-02 17:32:58.935938 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:32:58.936370 | orchestrator | 2025-06-02 17:32:58.937983 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-02 17:32:58.938537 | orchestrator | Monday 02 June 2025 17:32:58 +0000 (0:00:00.484) 0:01:05.625 *********** 2025-06-02 17:32:59.425901 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:32:59.426173 | orchestrator | 2025-06-02 17:32:59.426757 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-02 17:32:59.428270 | orchestrator | Monday 02 June 2025 17:32:59 +0000 (0:00:00.489) 0:01:06.115 *********** 2025-06-02 17:32:59.938941 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:32:59.939113 | orchestrator | 2025-06-02 17:32:59.940215 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-02 17:32:59.940879 | orchestrator | Monday 02 June 2025 17:32:59 +0000 (0:00:00.510) 0:01:06.626 *********** 2025-06-02 17:33:00.218820 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:33:00.218930 | orchestrator | 2025-06-02 17:33:00.218945 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-02 17:33:00.219080 | orchestrator | Monday 02 June 2025 17:33:00 +0000 (0:00:00.282) 0:01:06.908 *********** 2025-06-02 17:33:00.336803 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:33:00.338082 | orchestrator | 2025-06-02 17:33:00.338490 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-02 17:33:00.339482 | orchestrator | Monday 02 June 2025 17:33:00 +0000 (0:00:00.116) 0:01:07.025 *********** 2025-06-02 17:33:00.452934 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:33:00.453047 | orchestrator | 2025-06-02 17:33:00.453063 | 
orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-02 17:33:00.453147 | orchestrator | Monday 02 June 2025 17:33:00 +0000 (0:00:00.115) 0:01:07.141 *********** 2025-06-02 17:33:00.580821 | orchestrator | ok: [testbed-node-5] => { 2025-06-02 17:33:00.581016 | orchestrator |  "vgs_report": { 2025-06-02 17:33:00.581814 | orchestrator |  "vg": [] 2025-06-02 17:33:00.582088 | orchestrator |  } 2025-06-02 17:33:00.584104 | orchestrator | } 2025-06-02 17:33:00.584607 | orchestrator | 2025-06-02 17:33:00.586682 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-02 17:33:00.586856 | orchestrator | Monday 02 June 2025 17:33:00 +0000 (0:00:00.127) 0:01:07.268 *********** 2025-06-02 17:33:00.699952 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:33:00.700744 | orchestrator | 2025-06-02 17:33:00.701254 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-02 17:33:00.702109 | orchestrator | Monday 02 June 2025 17:33:00 +0000 (0:00:00.119) 0:01:07.388 *********** 2025-06-02 17:33:00.827930 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:33:00.829654 | orchestrator | 2025-06-02 17:33:00.830308 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-02 17:33:00.832342 | orchestrator | Monday 02 June 2025 17:33:00 +0000 (0:00:00.128) 0:01:07.517 *********** 2025-06-02 17:33:00.958244 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:33:00.958473 | orchestrator | 2025-06-02 17:33:00.959529 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-02 17:33:00.960320 | orchestrator | Monday 02 June 2025 17:33:00 +0000 (0:00:00.129) 0:01:07.647 *********** 2025-06-02 17:33:01.099037 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:33:01.099414 | orchestrator | 2025-06-02 17:33:01.100395 | 
orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-02 17:33:01.101920 | orchestrator | Monday 02 June 2025 17:33:01 +0000 (0:00:00.141) 0:01:07.788 *********** 2025-06-02 17:33:01.231063 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:33:01.231626 | orchestrator | 2025-06-02 17:33:01.232469 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-02 17:33:01.233245 | orchestrator | Monday 02 June 2025 17:33:01 +0000 (0:00:00.131) 0:01:07.919 *********** 2025-06-02 17:33:01.352211 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:33:01.352381 | orchestrator | 2025-06-02 17:33:01.353432 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-02 17:33:01.353912 | orchestrator | Monday 02 June 2025 17:33:01 +0000 (0:00:00.121) 0:01:08.041 *********** 2025-06-02 17:33:01.470687 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:33:01.471061 | orchestrator | 2025-06-02 17:33:01.472267 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-02 17:33:01.472939 | orchestrator | Monday 02 June 2025 17:33:01 +0000 (0:00:00.117) 0:01:08.159 *********** 2025-06-02 17:33:01.600478 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:33:01.601570 | orchestrator | 2025-06-02 17:33:01.602142 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-02 17:33:01.602673 | orchestrator | Monday 02 June 2025 17:33:01 +0000 (0:00:00.129) 0:01:08.289 *********** 2025-06-02 17:33:01.868692 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:33:01.869695 | orchestrator | 2025-06-02 17:33:01.870558 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-02 17:33:01.871444 | orchestrator | Monday 02 June 2025 17:33:01 +0000 (0:00:00.268) 0:01:08.557 *********** 
2025-06-02 17:33:01.993378 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:01.993469 | orchestrator |
2025-06-02 17:33:01.994108 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-06-02 17:33:01.994781 | orchestrator | Monday 02 June 2025 17:33:01 +0000 (0:00:00.124) 0:01:08.681 ***********
2025-06-02 17:33:02.127677 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:02.128185 | orchestrator |
2025-06-02 17:33:02.129968 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-06-02 17:33:02.130595 | orchestrator | Monday 02 June 2025 17:33:02 +0000 (0:00:00.133) 0:01:08.815 ***********
2025-06-02 17:33:02.269881 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:02.271006 | orchestrator |
2025-06-02 17:33:02.271955 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-06-02 17:33:02.273522 | orchestrator | Monday 02 June 2025 17:33:02 +0000 (0:00:00.142) 0:01:08.958 ***********
2025-06-02 17:33:02.395438 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:02.395716 | orchestrator |
2025-06-02 17:33:02.396223 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-06-02 17:33:02.397064 | orchestrator | Monday 02 June 2025 17:33:02 +0000 (0:00:00.125) 0:01:09.084 ***********
2025-06-02 17:33:02.532447 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:02.532691 | orchestrator |
2025-06-02 17:33:02.535069 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-06-02 17:33:02.535095 | orchestrator | Monday 02 June 2025 17:33:02 +0000 (0:00:00.137) 0:01:09.221 ***********
2025-06-02 17:33:02.673969 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:33:02.675161 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:33:02.675560 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:02.676186 | orchestrator |
2025-06-02 17:33:02.677650 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-06-02 17:33:02.678088 | orchestrator | Monday 02 June 2025 17:33:02 +0000 (0:00:00.140) 0:01:09.362 ***********
2025-06-02 17:33:02.823854 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:33:02.824677 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:33:02.825366 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:02.826729 | orchestrator |
2025-06-02 17:33:02.827223 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-06-02 17:33:02.828156 | orchestrator | Monday 02 June 2025 17:33:02 +0000 (0:00:00.148) 0:01:09.510 ***********
2025-06-02 17:33:02.969601 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:33:02.971015 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:33:02.971118 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:02.972911 | orchestrator |
2025-06-02 17:33:02.974152 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-06-02 17:33:02.974864 | orchestrator | Monday 02 June 2025 17:33:02 +0000 (0:00:00.147) 0:01:09.658 ***********
2025-06-02 17:33:03.100828 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:33:03.101054 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:33:03.103132 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:03.103896 | orchestrator |
2025-06-02 17:33:03.103922 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-06-02 17:33:03.104783 | orchestrator | Monday 02 June 2025 17:33:03 +0000 (0:00:00.131) 0:01:09.789 ***********
2025-06-02 17:33:03.235917 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:33:03.238483 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:33:03.238678 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:03.238755 | orchestrator |
2025-06-02 17:33:03.239461 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-06-02 17:33:03.239897 | orchestrator | Monday 02 June 2025 17:33:03 +0000 (0:00:00.135) 0:01:09.925 ***********
2025-06-02 17:33:03.380474 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:33:03.381064 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:33:03.383190 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:03.383225 | orchestrator |
2025-06-02 17:33:03.383643 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-06-02 17:33:03.384590 | orchestrator | Monday 02 June 2025 17:33:03 +0000 (0:00:00.144) 0:01:10.069 ***********
2025-06-02 17:33:03.675304 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:33:03.676123 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:33:03.676789 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:03.677541 | orchestrator |
2025-06-02 17:33:03.678149 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-06-02 17:33:03.678758 | orchestrator | Monday 02 June 2025 17:33:03 +0000 (0:00:00.294) 0:01:10.364 ***********
2025-06-02 17:33:03.819567 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:33:03.819701 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:33:03.820363 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:03.821217 | orchestrator |
2025-06-02 17:33:03.821834 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-06-02 17:33:03.822802 | orchestrator | Monday 02 June 2025 17:33:03 +0000 (0:00:00.143) 0:01:10.508 ***********
2025-06-02 17:33:04.361711 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:33:04.363085 | orchestrator |
2025-06-02 17:33:04.363757 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-06-02 17:33:04.365133 | orchestrator | Monday 02 June 2025 17:33:04 +0000 (0:00:00.541) 0:01:11.049 ***********
2025-06-02 17:33:04.919195 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:33:04.919934 | orchestrator |
2025-06-02 17:33:04.920872 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-06-02 17:33:04.921434 | orchestrator | Monday 02 June 2025 17:33:04 +0000 (0:00:00.557) 0:01:11.607 ***********
2025-06-02 17:33:05.053426 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:33:05.054247 | orchestrator |
2025-06-02 17:33:05.055270 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-06-02 17:33:05.056496 | orchestrator | Monday 02 June 2025 17:33:05 +0000 (0:00:00.134) 0:01:11.741 ***********
2025-06-02 17:33:05.211602 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'vg_name': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:33:05.213055 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'vg_name': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:33:05.213749 | orchestrator |
2025-06-02 17:33:05.214381 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-06-02 17:33:05.214985 | orchestrator | Monday 02 June 2025 17:33:05 +0000 (0:00:00.158) 0:01:11.900 ***********
2025-06-02 17:33:05.357383 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:33:05.358672 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:33:05.359444 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:05.360717 | orchestrator |
2025-06-02 17:33:05.361564 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-06-02 17:33:05.362118 | orchestrator | Monday 02 June 2025 17:33:05 +0000 (0:00:00.145) 0:01:12.046 ***********
2025-06-02 17:33:05.552604 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:33:05.552713 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:33:05.553396 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:05.553821 | orchestrator |
2025-06-02 17:33:05.555238 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-02 17:33:05.556174 | orchestrator | Monday 02 June 2025 17:33:05 +0000 (0:00:00.194) 0:01:12.241 ***********
2025-06-02 17:33:05.726130 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
2025-06-02 17:33:05.726282 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
2025-06-02 17:33:05.726771 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:05.727707 | orchestrator |
2025-06-02 17:33:05.728471 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-02 17:33:05.729157 | orchestrator | Monday 02 June 2025 17:33:05 +0000 (0:00:00.171) 0:01:12.413 ***********
2025-06-02 17:33:05.851769 | orchestrator | ok: [testbed-node-5] => {
2025-06-02 17:33:05.852182 | orchestrator |  "lvm_report": {
2025-06-02 17:33:05.852987 | orchestrator |  "lv": [
2025-06-02 17:33:05.854199 | orchestrator |  {
2025-06-02 17:33:05.854690 | orchestrator |  "lv_name": "osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14",
2025-06-02 17:33:05.855293 | orchestrator |  "vg_name": "ceph-455b12e9-4014-57cf-aec2-de5d805a7d14"
2025-06-02 17:33:05.856122 | orchestrator |  },
2025-06-02 17:33:05.856808 | orchestrator |  {
2025-06-02 17:33:05.858014 | orchestrator |  "lv_name": "osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc",
2025-06-02 17:33:05.858999 | orchestrator |  "vg_name": "ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc"
2025-06-02 17:33:05.859484 | orchestrator |  }
2025-06-02 17:33:05.860568 | orchestrator |  ],
2025-06-02 17:33:05.861113 | orchestrator |  "pv": [
2025-06-02 17:33:05.861910 | orchestrator |  {
2025-06-02 17:33:05.862584 | orchestrator |  "pv_name": "/dev/sdb",
2025-06-02 17:33:05.863020 | orchestrator |  "vg_name": "ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc"
2025-06-02 17:33:05.863606 | orchestrator |  },
2025-06-02 17:33:05.864551 | orchestrator |  {
2025-06-02 17:33:05.865195 | orchestrator |  "pv_name": "/dev/sdc",
2025-06-02 17:33:05.866115 | orchestrator |  "vg_name": "ceph-455b12e9-4014-57cf-aec2-de5d805a7d14"
2025-06-02 17:33:05.866668 | orchestrator |  }
2025-06-02 17:33:05.867582 | orchestrator |  ]
2025-06-02 17:33:05.868222 | orchestrator |  }
2025-06-02 17:33:05.869183 | orchestrator | }
2025-06-02 17:33:05.870182 | orchestrator |
2025-06-02 17:33:05.870784 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:33:05.871021 | orchestrator | 2025-06-02 17:33:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:33:05.871255 | orchestrator | 2025-06-02 17:33:05 | INFO  | Please wait and do not abort execution.
2025-06-02 17:33:05.871629 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 17:33:05.872096 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 17:33:05.873020 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-02 17:33:05.873048 | orchestrator |
2025-06-02 17:33:05.873796 | orchestrator |
2025-06-02 17:33:05.874805 | orchestrator |
2025-06-02 17:33:05.876022 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:33:05.876559 | orchestrator | Monday 02 June 2025 17:33:05 +0000 (0:00:00.127) 0:01:12.540 ***********
2025-06-02 17:33:05.877269 | orchestrator | ===============================================================================
2025-06-02 17:33:05.877930 | orchestrator | Create block VGs -------------------------------------------------------- 5.73s
2025-06-02 17:33:05.878627 | orchestrator | Create block LVs -------------------------------------------------------- 4.10s
2025-06-02 17:33:05.879300 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.88s
2025-06-02 17:33:05.879924 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.58s
2025-06-02 17:33:05.881208 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.56s
2025-06-02 17:33:05.881910 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.53s
2025-06-02 17:33:05.882138 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.52s
2025-06-02 17:33:05.882628 | orchestrator | Add known partitions to the list of available block devices ------------- 1.45s
2025-06-02 17:33:05.882668 | orchestrator | Add known links to the list of available block devices ------------------ 1.28s
2025-06-02 17:33:05.882849 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s
2025-06-02 17:33:05.884111 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s
2025-06-02 17:33:05.884524 | orchestrator | Print LVM report data --------------------------------------------------- 0.89s
2025-06-02 17:33:05.885625 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s
2025-06-02 17:33:05.886646 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.75s
2025-06-02 17:33:05.887438 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2025-06-02 17:33:05.888238 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.72s
2025-06-02 17:33:05.890799 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.71s
2025-06-02 17:33:05.891359 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.70s
2025-06-02 17:33:05.892246 | orchestrator | Get initial list of available block devices ----------------------------- 0.70s
2025-06-02 17:33:05.892633 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.69s
2025-06-02 17:33:07.888452 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:33:07.888590 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:33:07.888606 | orchestrator | Registering Redlock._release_script
2025-06-02 17:33:07.944699 | orchestrator | 2025-06-02 17:33:07 | INFO  | Task 6d1400eb-3014-4005-adae-ef8e4f7a8cfa (facts) was prepared for execution.
2025-06-02 17:33:07.944797 | orchestrator | 2025-06-02 17:33:07 | INFO  | It takes a moment until task 6d1400eb-3014-4005-adae-ef8e4f7a8cfa (facts) has been started and output is visible here.
2025-06-02 17:33:11.856666 | orchestrator |
2025-06-02 17:33:11.857245 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-02 17:33:11.859732 | orchestrator |
2025-06-02 17:33:11.861154 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 17:33:11.862309 | orchestrator | Monday 02 June 2025 17:33:11 +0000 (0:00:00.263) 0:00:00.263 ***********
2025-06-02 17:33:12.942083 | orchestrator | ok: [testbed-manager]
2025-06-02 17:33:12.942280 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:33:12.943035 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:33:12.943318 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:33:12.943760 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:33:12.946263 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:33:12.946306 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:33:12.946318 | orchestrator |
2025-06-02 17:33:12.947332 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 17:33:12.947489 | orchestrator | Monday 02 June 2025 17:33:12 +0000 (0:00:01.083) 0:00:01.347 ***********
2025-06-02 17:33:13.188965 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:33:13.268895 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:33:13.343138 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:33:13.421601 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:33:13.504853 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:33:14.209812 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:33:14.211579 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:14.213026 | orchestrator |
2025-06-02 17:33:14.213886 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 17:33:14.215038 | orchestrator |
2025-06-02 17:33:14.216021 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 17:33:14.220281 | orchestrator | Monday 02 June 2025 17:33:14 +0000 (0:00:01.269) 0:00:02.616 ***********
2025-06-02 17:33:19.209239 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:33:19.210435 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:33:19.210595 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:33:19.214491 | orchestrator | ok: [testbed-manager]
2025-06-02 17:33:19.215085 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:33:19.215782 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:33:19.217029 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:33:19.218476 | orchestrator |
2025-06-02 17:33:19.220275 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 17:33:19.220951 | orchestrator |
2025-06-02 17:33:19.222188 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 17:33:19.222921 | orchestrator | Monday 02 June 2025 17:33:19 +0000 (0:00:05.002) 0:00:07.619 ***********
2025-06-02 17:33:19.354842 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:33:19.431222 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:33:19.500163 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:33:19.571543 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:33:19.648055 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:33:19.683975 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:33:19.685138 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:33:19.686361 | orchestrator |
2025-06-02 17:33:19.688154 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:33:19.689080 | orchestrator | 2025-06-02 17:33:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 17:33:19.689374 | orchestrator | 2025-06-02 17:33:19 | INFO  | Please wait and do not abort execution.
2025-06-02 17:33:19.690200 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:33:19.690550 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:33:19.691552 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:33:19.692354 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:33:19.693305 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:33:19.693697 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:33:19.694307 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 17:33:19.695055 | orchestrator |
2025-06-02 17:33:19.695874 | orchestrator |
2025-06-02 17:33:19.696310 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:33:19.696756 | orchestrator | Monday 02 June 2025 17:33:19 +0000 (0:00:00.475) 0:00:08.094 ***********
2025-06-02 17:33:19.697151 | orchestrator | ===============================================================================
2025-06-02 17:33:19.697536 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.00s
2025-06-02 17:33:19.698049 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.27s
2025-06-02 17:33:19.698503 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.08s
2025-06-02 17:33:19.698927 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.48s
2025-06-02 17:33:20.166673 | orchestrator |
2025-06-02 17:33:20.169062 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Mon Jun 2 17:33:20 UTC 2025
2025-06-02 17:33:20.169103 | orchestrator |
2025-06-02 17:33:21.670656 | orchestrator | 2025-06-02 17:33:21 | INFO  | Collection nutshell is prepared for execution
2025-06-02 17:33:21.670754 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [0] - dotfiles
2025-06-02 17:33:21.676552 | orchestrator | Registering Redlock._acquired_script
2025-06-02 17:33:21.676627 | orchestrator | Registering Redlock._extend_script
2025-06-02 17:33:21.676640 | orchestrator | Registering Redlock._release_script
2025-06-02 17:33:21.681752 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [0] - homer
2025-06-02 17:33:21.682078 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [0] - netdata
2025-06-02 17:33:21.682312 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [0] - openstackclient
2025-06-02 17:33:21.682330 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [0] - phpmyadmin
2025-06-02 17:33:21.682340 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [0] - common
2025-06-02 17:33:21.685037 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [1] -- loadbalancer
2025-06-02 17:33:21.685084 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [2] --- opensearch
2025-06-02 17:33:21.685310 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [2] --- mariadb-ng
2025-06-02 17:33:21.685584 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [3] ---- horizon
2025-06-02 17:33:21.685724 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [3] ---- keystone
2025-06-02 17:33:21.685740 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [4] ----- neutron
2025-06-02 17:33:21.685899 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [5] ------ wait-for-nova
2025-06-02 17:33:21.685916 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [5] ------ octavia
2025-06-02 17:33:21.686442 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [4] ----- barbican
2025-06-02 17:33:21.686738 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [4] ----- designate
2025-06-02 17:33:21.686820 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [4] ----- ironic
2025-06-02 17:33:21.686833 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [4] ----- placement
2025-06-02 17:33:21.686961 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [4] ----- magnum
2025-06-02 17:33:21.687769 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [1] -- openvswitch
2025-06-02 17:33:21.687957 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [2] --- ovn
2025-06-02 17:33:21.688055 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [1] -- memcached
2025-06-02 17:33:21.688264 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [1] -- redis
2025-06-02 17:33:21.688370 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [1] -- rabbitmq-ng
2025-06-02 17:33:21.688437 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [0] - kubernetes
2025-06-02 17:33:21.691188 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [1] -- kubeconfig
2025-06-02 17:33:21.691219 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [1] -- copy-kubeconfig
2025-06-02 17:33:21.691380 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [0] - ceph
2025-06-02 17:33:21.693233 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [1] -- ceph-pools
2025-06-02 17:33:21.693322 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [2] --- copy-ceph-keys
2025-06-02 17:33:21.693336 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [3] ---- cephclient
2025-06-02 17:33:21.693443 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-06-02 17:33:21.693507 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [4] ----- wait-for-keystone
2025-06-02 17:33:21.693679 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [5] ------ kolla-ceph-rgw
2025-06-02 17:33:21.693773 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [5] ------ glance
2025-06-02 17:33:21.693866 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [5] ------ cinder
2025-06-02 17:33:21.694095 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [5] ------ nova
2025-06-02 17:33:21.694440 | orchestrator | 2025-06-02 17:33:21 | INFO  | A [4] ----- prometheus
2025-06-02 17:33:21.694620 | orchestrator | 2025-06-02 17:33:21 | INFO  | D [5] ------ grafana
2025-06-02 17:33:21.885846 | orchestrator | 2025-06-02 17:33:21 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-06-02 17:33:21.888121 | orchestrator | 2025-06-02 17:33:21 | INFO  | Tasks are running in the background
2025-06-02 17:33:24.386164 | orchestrator | 2025-06-02 17:33:24 | INFO  | No task IDs specified, wait for all currently running tasks
2025-06-02 17:33:26.512746 | orchestrator | 2025-06-02 17:33:26 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:33:26.515177 | orchestrator | 2025-06-02 17:33:26 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED
2025-06-02 17:33:26.515420 | orchestrator | 2025-06-02 17:33:26 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:33:26.517698 | orchestrator | 2025-06-02 17:33:26 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED
2025-06-02 17:33:26.518378 | orchestrator | 2025-06-02 17:33:26 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED
2025-06-02 17:33:26.520985 | orchestrator | 2025-06-02 17:33:26 | INFO  | Task 3dd7c122-5025-46eb-bfa2-09f11af01489 is in state STARTED
2025-06-02 17:33:26.522046 | orchestrator | 2025-06-02 17:33:26 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED
2025-06-02 17:33:26.522104 | orchestrator | 2025-06-02 17:33:26 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:29.569391 | orchestrator | 2025-06-02 17:33:29 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:33:29.574005 | orchestrator | 2025-06-02 17:33:29 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED
2025-06-02 17:33:29.574586 | orchestrator | 2025-06-02 17:33:29 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:33:29.574795 | orchestrator | 2025-06-02 17:33:29 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED
2025-06-02 17:33:29.582088 | orchestrator | 2025-06-02 17:33:29 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED
2025-06-02 17:33:29.585022 | orchestrator | 2025-06-02 17:33:29 | INFO  | Task 3dd7c122-5025-46eb-bfa2-09f11af01489 is in state STARTED
2025-06-02 17:33:29.585407 | orchestrator | 2025-06-02 17:33:29 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED
2025-06-02 17:33:29.585582 | orchestrator | 2025-06-02 17:33:29 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:32.629411 | orchestrator | 2025-06-02 17:33:32 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:33:32.632842 | orchestrator | 2025-06-02 17:33:32 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED
2025-06-02 17:33:32.635504 | orchestrator | 2025-06-02 17:33:32 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:33:32.635954 | orchestrator | 2025-06-02 17:33:32 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED
2025-06-02 17:33:32.636469 | orchestrator | 2025-06-02 17:33:32 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED
2025-06-02 17:33:32.636994 | orchestrator | 2025-06-02 17:33:32 | INFO  | Task 3dd7c122-5025-46eb-bfa2-09f11af01489 is in state STARTED
2025-06-02 17:33:32.638419 | orchestrator | 2025-06-02 17:33:32 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED
2025-06-02 17:33:32.638451 | orchestrator | 2025-06-02 17:33:32 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:35.676390 | orchestrator | 2025-06-02 17:33:35 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:33:35.676499 | orchestrator | 2025-06-02 17:33:35 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED
2025-06-02 17:33:35.676545 | orchestrator | 2025-06-02 17:33:35 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:33:35.676564 | orchestrator | 2025-06-02 17:33:35 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED
2025-06-02 17:33:35.676582 | orchestrator | 2025-06-02 17:33:35 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED
2025-06-02 17:33:35.676628 | orchestrator | 2025-06-02 17:33:35 | INFO  | Task 3dd7c122-5025-46eb-bfa2-09f11af01489 is in state STARTED
2025-06-02 17:33:35.676639 | orchestrator | 2025-06-02 17:33:35 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED
2025-06-02 17:33:35.676649 | orchestrator | 2025-06-02 17:33:35 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:38.709332 | orchestrator | 2025-06-02 17:33:38 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:33:38.709420 | orchestrator | 2025-06-02 17:33:38 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED
2025-06-02 17:33:38.709488 | orchestrator | 2025-06-02 17:33:38 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:33:38.710005 | orchestrator | 2025-06-02 17:33:38 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED
2025-06-02 17:33:38.712457 | orchestrator | 2025-06-02 17:33:38 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED
2025-06-02 17:33:38.712907 | orchestrator | 2025-06-02 17:33:38 | INFO  | Task 3dd7c122-5025-46eb-bfa2-09f11af01489 is in state STARTED
2025-06-02 17:33:38.714136 | orchestrator | 2025-06-02 17:33:38 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED
2025-06-02 17:33:38.714154 | orchestrator | 2025-06-02 17:33:38 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:41.751447 | orchestrator | 2025-06-02 17:33:41 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:33:41.754253 | orchestrator | 2025-06-02 17:33:41 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED
2025-06-02 17:33:41.758757 | orchestrator | 2025-06-02 17:33:41 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:33:41.758893 | orchestrator | 2025-06-02 17:33:41 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED
2025-06-02 17:33:41.760468 | orchestrator | 2025-06-02 17:33:41 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED
2025-06-02 17:33:41.761939 | orchestrator | 2025-06-02 17:33:41 | INFO  | Task 3dd7c122-5025-46eb-bfa2-09f11af01489 is in state STARTED
2025-06-02 17:33:41.763078 | orchestrator | 2025-06-02 17:33:41 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED
2025-06-02 17:33:41.767022 | orchestrator | 2025-06-02 17:33:41 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:44.820654 | orchestrator | 2025-06-02 17:33:44 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:33:44.835393 | orchestrator | 2025-06-02 17:33:44 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED
2025-06-02 17:33:44.835498 | orchestrator | 2025-06-02 17:33:44 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:33:44.835514 | orchestrator | 2025-06-02 17:33:44 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED
2025-06-02 17:33:44.835598 | orchestrator | 2025-06-02 17:33:44 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED
2025-06-02 17:33:44.838111 | orchestrator | 2025-06-02 17:33:44 | INFO  | Task 3dd7c122-5025-46eb-bfa2-09f11af01489 is in state STARTED
2025-06-02 17:33:44.842619 | orchestrator | 2025-06-02 17:33:44 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED
2025-06-02 17:33:44.842690 | orchestrator | 2025-06-02 17:33:44 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:47.927961 | orchestrator | 2025-06-02 17:33:47 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:33:47.929263 | orchestrator | 2025-06-02 17:33:47 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED
2025-06-02 17:33:47.931850 | orchestrator | 2025-06-02 17:33:47 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:33:47.937139 | orchestrator | 2025-06-02 17:33:47 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED
2025-06-02 17:33:47.937180 | orchestrator | 2025-06-02 17:33:47 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED
2025-06-02 17:33:47.944028 | orchestrator | 2025-06-02 17:33:47 | INFO  | Task 3dd7c122-5025-46eb-bfa2-09f11af01489 is in state STARTED
2025-06-02 17:33:47.944087 | orchestrator | 2025-06-02 17:33:47 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED
2025-06-02 17:33:47.944098 | orchestrator | 2025-06-02 17:33:47 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:51.052806 | orchestrator | 2025-06-02 17:33:51 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:33:51.056322 | orchestrator | 2025-06-02 17:33:51 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED
2025-06-02 17:33:51.062997 | orchestrator | 2025-06-02 17:33:51 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:33:51.075139 | orchestrator | 2025-06-02 17:33:51 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED
2025-06-02 17:33:51.077889 | orchestrator | 2025-06-02 17:33:51 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED
2025-06-02 17:33:51.084808 | orchestrator | 2025-06-02 17:33:51 | INFO  | Task 3dd7c122-5025-46eb-bfa2-09f11af01489 is in state STARTED
2025-06-02 17:33:51.086360 | orchestrator | 2025-06-02 17:33:51 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED
2025-06-02 17:33:51.086413 | orchestrator | 2025-06-02 17:33:51 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:54.146169 | orchestrator | 2025-06-02 17:33:54 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:33:54.148938 | orchestrator | 2025-06-02 17:33:54 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED
2025-06-02 17:33:54.150942 | orchestrator | 2025-06-02 17:33:54 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:33:54.153018 | orchestrator | 2025-06-02 17:33:54 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED
2025-06-02 17:33:54.155147 | orchestrator | 2025-06-02 17:33:54 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED
2025-06-02 17:33:54.156886 | orchestrator | 2025-06-02 17:33:54 | INFO  | Task 3dd7c122-5025-46eb-bfa2-09f11af01489 is in state STARTED
2025-06-02 17:33:54.158658 | orchestrator | 2025-06-02 17:33:54 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED
2025-06-02 17:33:54.159043 | orchestrator | 2025-06-02 17:33:54 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:33:57.259231 | orchestrator | 2025-06-02 17:33:57 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:33:57.261015 | orchestrator | 2025-06-02 17:33:57 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED
2025-06-02 17:33:57.266697 | orchestrator | 2025-06-02 17:33:57 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:33:57.270356 | orchestrator | 2025-06-02 17:33:57 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED
2025-06-02 17:33:57.272406 | orchestrator | 2025-06-02 17:33:57 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED
2025-06-02 17:33:57.278142 | orchestrator | 2025-06-02 17:33:57 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED
2025-06-02 17:33:57.279154 | orchestrator | 2025-06-02 17:33:57 | INFO  | Task 3dd7c122-5025-46eb-bfa2-09f11af01489 is in state SUCCESS
2025-06-02 17:33:57.279518 | orchestrator |
2025-06-02 17:33:57.279585 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-06-02 17:33:57.279599 | orchestrator |
2025-06-02 17:33:57.279611 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-06-02 17:33:57.279622 | orchestrator | Monday 02 June 2025 17:33:34 +0000 (0:00:00.772) 0:00:00.772 ***********
2025-06-02 17:33:57.279635 | orchestrator | changed: [testbed-manager]
2025-06-02 17:33:57.279655 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:33:57.279672 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:33:57.279689 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:33:57.279708 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:33:57.279727 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:33:57.279745 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:33:57.279763 | orchestrator |
2025-06-02 17:33:57.279780 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-06-02 17:33:57.279792 | orchestrator | Monday 02 June 2025 17:33:39 +0000 (0:00:04.557) 0:00:05.330 ***********
2025-06-02 17:33:57.279803 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-02 17:33:57.279814 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-02 17:33:57.279825 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-02 17:33:57.279836 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-02 17:33:57.279846 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-02 17:33:57.279857 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-02 17:33:57.279868 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-02 17:33:57.279878 | orchestrator |
2025-06-02 17:33:57.279890 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-06-02 17:33:57.279900 | orchestrator | Monday 02 June 2025 17:33:40 +0000 (0:00:01.728) 0:00:07.058 ***********
2025-06-02 17:33:57.279917 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:33:39.886094', 'end': '2025-06-02 17:33:39.894881', 'delta': '0:00:00.008787', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:57.279933 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:33:39.910825', 'end': '2025-06-02 17:33:39.917782', 'delta': '0:00:00.006957', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:57.279956 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:33:39.842648', 'end': '2025-06-02 17:33:39.849658', 'delta': '0:00:00.007010', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:57.280044 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:33:39.984169', 'end': '2025-06-02 17:33:39.992488', 'delta': '0:00:00.008319', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:57.280060 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:33:40.189066', 'end': '2025-06-02 17:33:40.195461', 'delta': '0:00:00.006395', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:57.280072 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:33:40.544271', 'end': '2025-06-02 17:33:40.550183', 'delta': '0:00:00.005912', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:57.280084 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-02 17:33:40.737671', 'end': '2025-06-02 17:33:40.747223', 'delta': '0:00:00.009552', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-06-02 17:33:57.280104 | orchestrator |
2025-06-02 17:33:57.280117 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-06-02 17:33:57.280137 | orchestrator | Monday 02 June 2025 17:33:44 +0000 (0:00:03.527) 0:00:10.586 ***********
2025-06-02 17:33:57.280156 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-06-02 17:33:57.280170 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-06-02 17:33:57.280181 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-06-02 17:33:57.280191 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-06-02 17:33:57.280202 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-06-02 17:33:57.280212 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-06-02 17:33:57.280223 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-06-02 17:33:57.280233 | orchestrator |
2025-06-02 17:33:57.280244 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-06-02 17:33:57.280255 | orchestrator | Monday 02 June 2025 17:33:48 +0000 (0:00:04.293) 0:00:14.881 ***********
2025-06-02 17:33:57.280266 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-06-02 17:33:57.280277 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-06-02 17:33:57.280288 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-06-02 17:33:57.280299 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-06-02 17:33:57.280309 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-06-02 17:33:57.280320 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-06-02 17:33:57.280331 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-06-02 17:33:57.280341 | orchestrator |
2025-06-02 17:33:57.280353 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:33:57.280372 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:33:57.280836 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:33:57.280875 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:33:57.280893 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:33:57.280904 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:33:57.280915 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:33:57.280986 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:33:57.281002 | orchestrator |
2025-06-02 17:33:57.281013 | orchestrator |
2025-06-02 17:33:57.281024 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:33:57.281035 | orchestrator | Monday 02 June 2025 17:33:54 +0000 (0:00:05.424) 0:00:20.306 ***********
2025-06-02 17:33:57.281045 | orchestrator | ===============================================================================
2025-06-02 17:33:57.281056 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 5.42s
2025-06-02 17:33:57.281067 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.56s
2025-06-02 17:33:57.281078 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 4.29s
2025-06-02 17:33:57.281089 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.53s
2025-06-02 17:33:57.281100 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.73s
2025-06-02 17:33:57.287129 | orchestrator | 2025-06-02 17:33:57 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED
2025-06-02 17:33:57.287188 | orchestrator | 2025-06-02 17:33:57 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:34:00.375502 | orchestrator | 2025-06-02 17:34:00 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:34:00.375679 | orchestrator | 2025-06-02 17:34:00 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED
2025-06-02 17:34:00.376693 | orchestrator | 2025-06-02 17:34:00 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:34:00.377110 | orchestrator | 2025-06-02 17:34:00 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED
2025-06-02 17:34:00.377893 | orchestrator | 2025-06-02 17:34:00 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED
2025-06-02 17:34:00.378794 | orchestrator | 2025-06-02 17:34:00 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED
2025-06-02 17:34:00.379428 | orchestrator | 2025-06-02 17:34:00 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED
2025-06-02 17:34:00.380219 | orchestrator | 2025-06-02 17:34:00 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:34:03.504880 | orchestrator | 2025-06-02 17:34:03 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:34:03.514925 | orchestrator | 2025-06-02 17:34:03 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED
2025-06-02 17:34:03.517995 | orchestrator | 2025-06-02 17:34:03 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:34:03.527095 | orchestrator | 2025-06-02 17:34:03 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED
2025-06-02 17:34:03.536953 | orchestrator | 2025-06-02 17:34:03 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state
STARTED 2025-06-02 17:34:03.545779 | orchestrator | 2025-06-02 17:34:03 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:03.545927 | orchestrator | 2025-06-02 17:34:03 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:03.548237 | orchestrator | 2025-06-02 17:34:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:06.624290 | orchestrator | 2025-06-02 17:34:06 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:06.631475 | orchestrator | 2025-06-02 17:34:06 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED 2025-06-02 17:34:06.633332 | orchestrator | 2025-06-02 17:34:06 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:06.637646 | orchestrator | 2025-06-02 17:34:06 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED 2025-06-02 17:34:06.644037 | orchestrator | 2025-06-02 17:34:06 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:06.644389 | orchestrator | 2025-06-02 17:34:06 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:06.646951 | orchestrator | 2025-06-02 17:34:06 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:06.647015 | orchestrator | 2025-06-02 17:34:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:09.761720 | orchestrator | 2025-06-02 17:34:09 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:09.767292 | orchestrator | 2025-06-02 17:34:09 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED 2025-06-02 17:34:09.767353 | orchestrator | 2025-06-02 17:34:09 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:09.767358 | orchestrator | 2025-06-02 17:34:09 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED 
2025-06-02 17:34:09.768518 | orchestrator | 2025-06-02 17:34:09 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:09.770442 | orchestrator | 2025-06-02 17:34:09 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:09.771897 | orchestrator | 2025-06-02 17:34:09 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:09.772340 | orchestrator | 2025-06-02 17:34:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:12.860716 | orchestrator | 2025-06-02 17:34:12 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:12.866804 | orchestrator | 2025-06-02 17:34:12 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state STARTED 2025-06-02 17:34:12.866901 | orchestrator | 2025-06-02 17:34:12 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:12.872263 | orchestrator | 2025-06-02 17:34:12 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED 2025-06-02 17:34:12.872404 | orchestrator | 2025-06-02 17:34:12 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:12.878846 | orchestrator | 2025-06-02 17:34:12 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:12.881143 | orchestrator | 2025-06-02 17:34:12 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:12.881185 | orchestrator | 2025-06-02 17:34:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:16.062876 | orchestrator | 2025-06-02 17:34:16 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:16.062976 | orchestrator | 2025-06-02 17:34:16 | INFO  | Task e6c2e861-3546-42e1-be62-e1a076ea7646 is in state SUCCESS 2025-06-02 17:34:16.063997 | orchestrator | 2025-06-02 17:34:16 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 
2025-06-02 17:34:16.069503 | orchestrator | 2025-06-02 17:34:16 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED 2025-06-02 17:34:16.074530 | orchestrator | 2025-06-02 17:34:16 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:16.079771 | orchestrator | 2025-06-02 17:34:16 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:16.082157 | orchestrator | 2025-06-02 17:34:16 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:16.084492 | orchestrator | 2025-06-02 17:34:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:19.142337 | orchestrator | 2025-06-02 17:34:19 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:19.151189 | orchestrator | 2025-06-02 17:34:19 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:19.152122 | orchestrator | 2025-06-02 17:34:19 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED 2025-06-02 17:34:19.155844 | orchestrator | 2025-06-02 17:34:19 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:19.161577 | orchestrator | 2025-06-02 17:34:19 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:19.163096 | orchestrator | 2025-06-02 17:34:19 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:19.163182 | orchestrator | 2025-06-02 17:34:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:22.230655 | orchestrator | 2025-06-02 17:34:22 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:22.232516 | orchestrator | 2025-06-02 17:34:22 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:22.232604 | orchestrator | 2025-06-02 17:34:22 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED 
2025-06-02 17:34:22.240828 | orchestrator | 2025-06-02 17:34:22 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:22.252646 | orchestrator | 2025-06-02 17:34:22 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:22.252741 | orchestrator | 2025-06-02 17:34:22 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:22.252752 | orchestrator | 2025-06-02 17:34:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:25.313458 | orchestrator | 2025-06-02 17:34:25 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:25.316924 | orchestrator | 2025-06-02 17:34:25 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:25.316982 | orchestrator | 2025-06-02 17:34:25 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED 2025-06-02 17:34:25.316990 | orchestrator | 2025-06-02 17:34:25 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:25.320328 | orchestrator | 2025-06-02 17:34:25 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:25.320372 | orchestrator | 2025-06-02 17:34:25 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:25.320381 | orchestrator | 2025-06-02 17:34:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:28.398067 | orchestrator | 2025-06-02 17:34:28 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:28.401303 | orchestrator | 2025-06-02 17:34:28 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:28.411464 | orchestrator | 2025-06-02 17:34:28 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED 2025-06-02 17:34:28.416802 | orchestrator | 2025-06-02 17:34:28 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 
2025-06-02 17:34:28.423449 | orchestrator | 2025-06-02 17:34:28 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:28.423526 | orchestrator | 2025-06-02 17:34:28 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:28.423536 | orchestrator | 2025-06-02 17:34:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:31.541118 | orchestrator | 2025-06-02 17:34:31 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:31.546396 | orchestrator | 2025-06-02 17:34:31 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:31.547107 | orchestrator | 2025-06-02 17:34:31 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED 2025-06-02 17:34:31.552891 | orchestrator | 2025-06-02 17:34:31 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:31.555074 | orchestrator | 2025-06-02 17:34:31 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:31.561119 | orchestrator | 2025-06-02 17:34:31 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:31.561207 | orchestrator | 2025-06-02 17:34:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:34.638628 | orchestrator | 2025-06-02 17:34:34 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:34.642610 | orchestrator | 2025-06-02 17:34:34 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:34.646545 | orchestrator | 2025-06-02 17:34:34 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state STARTED 2025-06-02 17:34:34.650272 | orchestrator | 2025-06-02 17:34:34 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:34.657161 | orchestrator | 2025-06-02 17:34:34 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 
2025-06-02 17:34:34.662351 | orchestrator | 2025-06-02 17:34:34 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:34.662430 | orchestrator | 2025-06-02 17:34:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:37.737323 | orchestrator | 2025-06-02 17:34:37 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:37.737473 | orchestrator | 2025-06-02 17:34:37 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:37.738655 | orchestrator | 2025-06-02 17:34:37 | INFO  | Task d792a834-542b-4268-8b8f-3f59fbc41492 is in state SUCCESS 2025-06-02 17:34:37.740378 | orchestrator | 2025-06-02 17:34:37 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:37.742345 | orchestrator | 2025-06-02 17:34:37 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:37.743337 | orchestrator | 2025-06-02 17:34:37 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:37.743391 | orchestrator | 2025-06-02 17:34:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:40.812094 | orchestrator | 2025-06-02 17:34:40 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:40.816955 | orchestrator | 2025-06-02 17:34:40 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:40.818145 | orchestrator | 2025-06-02 17:34:40 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:40.820297 | orchestrator | 2025-06-02 17:34:40 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:40.822210 | orchestrator | 2025-06-02 17:34:40 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:40.822254 | orchestrator | 2025-06-02 17:34:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:43.869994 | 
orchestrator | 2025-06-02 17:34:43 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:43.870209 | orchestrator | 2025-06-02 17:34:43 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:43.871774 | orchestrator | 2025-06-02 17:34:43 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:43.877321 | orchestrator | 2025-06-02 17:34:43 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:43.882938 | orchestrator | 2025-06-02 17:34:43 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:43.882990 | orchestrator | 2025-06-02 17:34:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:46.924165 | orchestrator | 2025-06-02 17:34:46 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:46.924304 | orchestrator | 2025-06-02 17:34:46 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:46.925240 | orchestrator | 2025-06-02 17:34:46 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:46.925873 | orchestrator | 2025-06-02 17:34:46 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:46.930092 | orchestrator | 2025-06-02 17:34:46 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:46.930140 | orchestrator | 2025-06-02 17:34:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:50.023809 | orchestrator | 2025-06-02 17:34:50 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:50.027092 | orchestrator | 2025-06-02 17:34:50 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:50.028247 | orchestrator | 2025-06-02 17:34:50 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:50.032125 | 
orchestrator | 2025-06-02 17:34:50 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:50.033543 | orchestrator | 2025-06-02 17:34:50 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:50.033838 | orchestrator | 2025-06-02 17:34:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:53.182386 | orchestrator | 2025-06-02 17:34:53 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:53.188708 | orchestrator | 2025-06-02 17:34:53 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:53.191008 | orchestrator | 2025-06-02 17:34:53 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:53.193280 | orchestrator | 2025-06-02 17:34:53 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state STARTED 2025-06-02 17:34:53.195171 | orchestrator | 2025-06-02 17:34:53 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:53.195331 | orchestrator | 2025-06-02 17:34:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:56.267623 | orchestrator | 2025-06-02 17:34:56 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:56.270852 | orchestrator | 2025-06-02 17:34:56 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:56.273818 | orchestrator | 2025-06-02 17:34:56 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:56.278958 | orchestrator | 2025-06-02 17:34:56 | INFO  | Task 419951f6-c525-4c1d-bcd1-7a5edc681554 is in state SUCCESS 2025-06-02 17:34:56.280535 | orchestrator | 2025-06-02 17:34:56.280586 | orchestrator | 2025-06-02 17:34:56.280594 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-06-02 17:34:56.280603 | orchestrator | 2025-06-02 17:34:56.280610 | orchestrator | TASK 
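The repeated "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above are a plain polling loop: query each task's state, drop the ones that reached SUCCESS, and sleep between rounds. A minimal sketch of that pattern, assuming a hypothetical `fetch_state` callable in place of the real OSISM task API:

```python
import time

def wait_for_tasks(task_ids, fetch_state, interval=1.0, timeout=300.0):
    """Poll until every task reports SUCCESS, or the timeout expires.

    `fetch_state(task_id)` is assumed to return a state string such as
    "STARTED" or "SUCCESS". Returns True when all tasks finished in time.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending and time.monotonic() < deadline:
        for task_id in sorted(pending):
            state = fetch_state(task_id)
            print(f"INFO | Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"INFO | Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return not pending
```

As in the log, tasks leave the polling set one at a time as they flip to SUCCESS, so later rounds print fewer state lines.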
[osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-06-02 17:34:56.280618 | orchestrator | Monday 02 June 2025 17:33:36 +0000 (0:00:01.165) 0:00:01.165 *********** 2025-06-02 17:34:56.280625 | orchestrator | ok: [testbed-manager] => { 2025-06-02 17:34:56.280633 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-06-02 17:34:56.280641 | orchestrator | } 2025-06-02 17:34:56.280649 | orchestrator | 2025-06-02 17:34:56.280656 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-06-02 17:34:56.280663 | orchestrator | Monday 02 June 2025 17:33:36 +0000 (0:00:00.342) 0:00:01.507 *********** 2025-06-02 17:34:56.280669 | orchestrator | ok: [testbed-manager] 2025-06-02 17:34:56.280676 | orchestrator | 2025-06-02 17:34:56.280695 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-06-02 17:34:56.280702 | orchestrator | Monday 02 June 2025 17:33:38 +0000 (0:00:01.593) 0:00:03.101 *********** 2025-06-02 17:34:56.280708 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-06-02 17:34:56.280716 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-06-02 17:34:56.280723 | orchestrator | 2025-06-02 17:34:56.280730 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-06-02 17:34:56.280737 | orchestrator | Monday 02 June 2025 17:33:39 +0000 (0:00:00.984) 0:00:04.086 *********** 2025-06-02 17:34:56.280743 | orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.280750 | orchestrator | 2025-06-02 17:34:56.280757 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-06-02 17:34:56.280763 | orchestrator | Monday 02 June 2025 17:33:41 +0000 (0:00:02.203) 0:00:06.289 *********** 2025-06-02 17:34:56.280770 | 
orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.280777 | orchestrator | 2025-06-02 17:34:56.280784 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-06-02 17:34:56.280790 | orchestrator | Monday 02 June 2025 17:33:43 +0000 (0:00:02.247) 0:00:08.537 *********** 2025-06-02 17:34:56.280797 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-06-02 17:34:56.280804 | orchestrator | ok: [testbed-manager] 2025-06-02 17:34:56.280812 | orchestrator | 2025-06-02 17:34:56.280818 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-06-02 17:34:56.280825 | orchestrator | Monday 02 June 2025 17:34:10 +0000 (0:00:26.527) 0:00:35.064 *********** 2025-06-02 17:34:56.280831 | orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.280838 | orchestrator | 2025-06-02 17:34:56.280845 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:34:56.280852 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:34:56.280860 | orchestrator | 2025-06-02 17:34:56.280866 | orchestrator | 2025-06-02 17:34:56.280877 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:34:56.280883 | orchestrator | Monday 02 June 2025 17:34:13 +0000 (0:00:03.094) 0:00:38.159 *********** 2025-06-02 17:34:56.280890 | orchestrator | =============================================================================== 2025-06-02 17:34:56.280897 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.53s 2025-06-02 17:34:56.280904 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.09s 2025-06-02 17:34:56.280911 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.25s 
2025-06-02 17:34:56.280918 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.20s 2025-06-02 17:34:56.280925 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.59s 2025-06-02 17:34:56.280932 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.98s 2025-06-02 17:34:56.280939 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.34s 2025-06-02 17:34:56.280946 | orchestrator | 2025-06-02 17:34:56.280953 | orchestrator | 2025-06-02 17:34:56.280959 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-06-02 17:34:56.280965 | orchestrator | 2025-06-02 17:34:56.280972 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-06-02 17:34:56.280979 | orchestrator | Monday 02 June 2025 17:33:36 +0000 (0:00:01.284) 0:00:01.284 *********** 2025-06-02 17:34:56.280986 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-06-02 17:34:56.280994 | orchestrator | 2025-06-02 17:34:56.281001 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-06-02 17:34:56.281007 | orchestrator | Monday 02 June 2025 17:33:37 +0000 (0:00:00.543) 0:00:01.827 *********** 2025-06-02 17:34:56.281018 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-06-02 17:34:56.281025 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-06-02 17:34:56.281032 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-06-02 17:34:56.281039 | orchestrator | 2025-06-02 17:34:56.281045 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-06-02 
17:34:56.281052 | orchestrator | Monday 02 June 2025 17:33:38 +0000 (0:00:01.445) 0:00:03.272 *********** 2025-06-02 17:34:56.281059 | orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.281066 | orchestrator | 2025-06-02 17:34:56.281074 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-06-02 17:34:56.281080 | orchestrator | Monday 02 June 2025 17:33:40 +0000 (0:00:01.498) 0:00:04.771 *********** 2025-06-02 17:34:56.281096 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-06-02 17:34:56.281102 | orchestrator | ok: [testbed-manager] 2025-06-02 17:34:56.281109 | orchestrator | 2025-06-02 17:34:56.281169 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-06-02 17:34:56.281178 | orchestrator | Monday 02 June 2025 17:34:23 +0000 (0:00:43.484) 0:00:48.256 *********** 2025-06-02 17:34:56.281184 | orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.281190 | orchestrator | 2025-06-02 17:34:56.281197 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-06-02 17:34:56.281203 | orchestrator | Monday 02 June 2025 17:34:24 +0000 (0:00:01.158) 0:00:49.414 *********** 2025-06-02 17:34:56.281209 | orchestrator | ok: [testbed-manager] 2025-06-02 17:34:56.281215 | orchestrator | 2025-06-02 17:34:56.281222 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-06-02 17:34:56.281228 | orchestrator | Monday 02 June 2025 17:34:26 +0000 (0:00:01.691) 0:00:51.105 *********** 2025-06-02 17:34:56.281235 | orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.281241 | orchestrator | 2025-06-02 17:34:56.281247 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-06-02 17:34:56.281253 | orchestrator | Monday 02 June 2025 17:34:29 +0000 
(0:00:03.072) 0:00:54.178 *********** 2025-06-02 17:34:56.281259 | orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.281265 | orchestrator | 2025-06-02 17:34:56.281272 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-06-02 17:34:56.281278 | orchestrator | Monday 02 June 2025 17:34:31 +0000 (0:00:01.873) 0:00:56.052 *********** 2025-06-02 17:34:56.281284 | orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.281290 | orchestrator | 2025-06-02 17:34:56.281296 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-06-02 17:34:56.281302 | orchestrator | Monday 02 June 2025 17:34:33 +0000 (0:00:01.862) 0:00:57.915 *********** 2025-06-02 17:34:56.281308 | orchestrator | ok: [testbed-manager] 2025-06-02 17:34:56.281315 | orchestrator | 2025-06-02 17:34:56.281321 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:34:56.281327 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:34:56.281333 | orchestrator | 2025-06-02 17:34:56.281339 | orchestrator | 2025-06-02 17:34:56.281346 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:34:56.281352 | orchestrator | Monday 02 June 2025 17:34:34 +0000 (0:00:00.809) 0:00:58.725 *********** 2025-06-02 17:34:56.281358 | orchestrator | =============================================================================== 2025-06-02 17:34:56.281364 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 43.48s 2025-06-02 17:34:56.281370 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.07s 2025-06-02 17:34:56.281376 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.87s 2025-06-02 17:34:56.281383 | orchestrator | 
osism.services.openstackclient : Wait for an healthy service ------------ 1.86s 2025-06-02 17:34:56.281394 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.69s 2025-06-02 17:34:56.281400 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.50s 2025-06-02 17:34:56.281423 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.45s 2025-06-02 17:34:56.281430 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.16s 2025-06-02 17:34:56.281436 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.81s 2025-06-02 17:34:56.281442 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.54s 2025-06-02 17:34:56.281448 | orchestrator | 2025-06-02 17:34:56.281454 | orchestrator | 2025-06-02 17:34:56.281461 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:34:56.281467 | orchestrator | 2025-06-02 17:34:56.281473 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:34:56.281479 | orchestrator | Monday 02 June 2025 17:33:35 +0000 (0:00:00.938) 0:00:00.938 *********** 2025-06-02 17:34:56.281485 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-06-02 17:34:56.281491 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-06-02 17:34:56.281498 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-06-02 17:34:56.281504 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-06-02 17:34:56.281510 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-06-02 17:34:56.281517 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-06-02 17:34:56.281524 | orchestrator | changed: [testbed-node-5] => 
(item=enable_netdata_True) 2025-06-02 17:34:56.281530 | orchestrator | 2025-06-02 17:34:56.281536 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-06-02 17:34:56.281541 | orchestrator | 2025-06-02 17:34:56.281545 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-06-02 17:34:56.281549 | orchestrator | Monday 02 June 2025 17:33:38 +0000 (0:00:02.437) 0:00:03.376 *********** 2025-06-02 17:34:56.281559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:34:56.281588 | orchestrator | 2025-06-02 17:34:56.281593 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-06-02 17:34:56.281597 | orchestrator | Monday 02 June 2025 17:33:40 +0000 (0:00:02.437) 0:00:05.814 *********** 2025-06-02 17:34:56.281601 | orchestrator | ok: [testbed-manager] 2025-06-02 17:34:56.281605 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:34:56.281608 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:34:56.281612 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:34:56.281616 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:34:56.281624 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:34:56.281628 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:34:56.281631 | orchestrator | 2025-06-02 17:34:56.281635 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-06-02 17:34:56.281639 | orchestrator | Monday 02 June 2025 17:33:44 +0000 (0:00:03.761) 0:00:09.576 *********** 2025-06-02 17:34:56.281643 | orchestrator | ok: [testbed-manager] 2025-06-02 17:34:56.281646 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:34:56.281650 | orchestrator | ok: [testbed-node-1] 2025-06-02 
17:34:56.281653 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:34:56.281657 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:34:56.281661 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:34:56.281664 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:34:56.281668 | orchestrator | 2025-06-02 17:34:56.281672 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-06-02 17:34:56.281676 | orchestrator | Monday 02 June 2025 17:33:50 +0000 (0:00:06.469) 0:00:16.045 *********** 2025-06-02 17:34:56.281685 | orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.281689 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:34:56.281692 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:34:56.281696 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:34:56.281700 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:34:56.281703 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:34:56.281707 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:34:56.281711 | orchestrator | 2025-06-02 17:34:56.281715 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-06-02 17:34:56.281718 | orchestrator | Monday 02 June 2025 17:33:55 +0000 (0:00:04.264) 0:00:20.310 *********** 2025-06-02 17:34:56.281722 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:34:56.281726 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:34:56.281729 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:34:56.281733 | orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.281737 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:34:56.281740 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:34:56.281744 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:34:56.281748 | orchestrator | 2025-06-02 17:34:56.281751 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-06-02 17:34:56.281755 
| orchestrator | Monday 02 June 2025 17:34:05 +0000 (0:00:10.244) 0:00:30.554 *********** 2025-06-02 17:34:56.281759 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:34:56.281762 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:34:56.281766 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:34:56.281770 | orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.281773 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:34:56.281777 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:34:56.281780 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:34:56.281784 | orchestrator | 2025-06-02 17:34:56.281788 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-06-02 17:34:56.281792 | orchestrator | Monday 02 June 2025 17:34:24 +0000 (0:00:19.310) 0:00:49.865 *********** 2025-06-02 17:34:56.281798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:34:56.281803 | orchestrator | 2025-06-02 17:34:56.281806 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-06-02 17:34:56.281810 | orchestrator | Monday 02 June 2025 17:34:27 +0000 (0:00:03.025) 0:00:52.890 *********** 2025-06-02 17:34:56.281814 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-06-02 17:34:56.281818 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-06-02 17:34:56.281821 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-06-02 17:34:56.281825 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-06-02 17:34:56.281829 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-06-02 17:34:56.281832 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-06-02 
17:34:56.281836 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-06-02 17:34:56.281840 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-06-02 17:34:56.281843 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-06-02 17:34:56.281847 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-06-02 17:34:56.281851 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-06-02 17:34:56.281854 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-06-02 17:34:56.281858 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-06-02 17:34:56.281862 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-06-02 17:34:56.281865 | orchestrator | 2025-06-02 17:34:56.281869 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-06-02 17:34:56.281873 | orchestrator | Monday 02 June 2025 17:34:37 +0000 (0:00:10.259) 0:01:03.150 *********** 2025-06-02 17:34:56.281879 | orchestrator | ok: [testbed-manager] 2025-06-02 17:34:56.281883 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:34:56.281887 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:34:56.281890 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:34:56.281894 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:34:56.281898 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:34:56.281901 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:34:56.281905 | orchestrator | 2025-06-02 17:34:56.281909 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-06-02 17:34:56.281912 | orchestrator | Monday 02 June 2025 17:34:39 +0000 (0:00:01.497) 0:01:04.647 *********** 2025-06-02 17:34:56.281916 | orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.281920 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:34:56.281924 | orchestrator | changed: [testbed-node-1] 2025-06-02 
17:34:56.281927 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:34:56.281931 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:34:56.281935 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:34:56.281938 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:34:56.281942 | orchestrator | 2025-06-02 17:34:56.281946 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-06-02 17:34:56.281952 | orchestrator | Monday 02 June 2025 17:34:41 +0000 (0:00:02.358) 0:01:07.006 *********** 2025-06-02 17:34:56.281956 | orchestrator | ok: [testbed-manager] 2025-06-02 17:34:56.281959 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:34:56.281963 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:34:56.281967 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:34:56.281970 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:34:56.281974 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:34:56.281977 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:34:56.281981 | orchestrator | 2025-06-02 17:34:56.281985 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-06-02 17:34:56.281989 | orchestrator | Monday 02 June 2025 17:34:43 +0000 (0:00:02.075) 0:01:09.081 *********** 2025-06-02 17:34:56.281992 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:34:56.281996 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:34:56.282000 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:34:56.282003 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:34:56.282007 | orchestrator | ok: [testbed-manager] 2025-06-02 17:34:56.282010 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:34:56.282044 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:34:56.282048 | orchestrator | 2025-06-02 17:34:56.282051 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-06-02 17:34:56.282055 | orchestrator | Monday 02 June 2025 17:34:46 
+0000 (0:00:02.441) 0:01:11.523 *********** 2025-06-02 17:34:56.282059 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-06-02 17:34:56.282064 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:34:56.282067 | orchestrator | 2025-06-02 17:34:56.282071 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-06-02 17:34:56.282075 | orchestrator | Monday 02 June 2025 17:34:48 +0000 (0:00:02.060) 0:01:13.583 *********** 2025-06-02 17:34:56.282079 | orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.282083 | orchestrator | 2025-06-02 17:34:56.282086 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-06-02 17:34:56.282090 | orchestrator | Monday 02 June 2025 17:34:50 +0000 (0:00:02.583) 0:01:16.166 *********** 2025-06-02 17:34:56.282094 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:34:56.282098 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:34:56.282101 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:34:56.282105 | orchestrator | changed: [testbed-manager] 2025-06-02 17:34:56.282109 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:34:56.282165 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:34:56.282173 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:34:56.282179 | orchestrator | 2025-06-02 17:34:56.282185 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:34:56.282192 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:34:56.282202 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:34:56.282209 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:34:56.282213 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:34:56.282217 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:34:56.282221 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:34:56.282224 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:34:56.282228 | orchestrator | 2025-06-02 17:34:56.282232 | orchestrator | 2025-06-02 17:34:56.282236 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:34:56.282239 | orchestrator | Monday 02 June 2025 17:34:54 +0000 (0:00:03.928) 0:01:20.095 *********** 2025-06-02 17:34:56.282243 | orchestrator | =============================================================================== 2025-06-02 17:34:56.282247 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 19.31s 2025-06-02 17:34:56.282250 | orchestrator | osism.services.netdata : Copy configuration files ---------------------- 10.26s 2025-06-02 17:34:56.282254 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.24s 2025-06-02 17:34:56.282258 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 6.47s 2025-06-02 17:34:56.282262 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 4.27s 2025-06-02 17:34:56.282265 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.93s 2025-06-02 17:34:56.282269 | orchestrator | osism.services.netdata : Remove old architecture-dependent 
repository --- 3.76s 2025-06-02 17:34:56.282272 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 3.03s 2025-06-02 17:34:56.282276 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.58s 2025-06-02 17:34:56.282280 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.44s 2025-06-02 17:34:56.282283 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.44s 2025-06-02 17:34:56.282291 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.44s 2025-06-02 17:34:56.282294 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.36s 2025-06-02 17:34:56.282298 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.08s 2025-06-02 17:34:56.282302 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.06s 2025-06-02 17:34:56.282305 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.50s 2025-06-02 17:34:56.282625 | orchestrator | 2025-06-02 17:34:56 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:56.282672 | orchestrator | 2025-06-02 17:34:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:34:59.330809 | orchestrator | 2025-06-02 17:34:59 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:34:59.331751 | orchestrator | 2025-06-02 17:34:59 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:34:59.336580 | orchestrator | 2025-06-02 17:34:59 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:34:59.341727 | orchestrator | 2025-06-02 17:34:59 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:34:59.341789 | orchestrator | 
2025-06-02 17:34:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:02.388837 | orchestrator | 2025-06-02 17:35:02 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:02.393042 | orchestrator | 2025-06-02 17:35:02 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:02.394942 | orchestrator | 2025-06-02 17:35:02 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:02.397980 | orchestrator | 2025-06-02 17:35:02 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:02.398148 | orchestrator | 2025-06-02 17:35:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:05.452404 | orchestrator | 2025-06-02 17:35:05 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:05.453538 | orchestrator | 2025-06-02 17:35:05 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:05.457285 | orchestrator | 2025-06-02 17:35:05 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:05.457325 | orchestrator | 2025-06-02 17:35:05 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:05.457332 | orchestrator | 2025-06-02 17:35:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:08.498287 | orchestrator | 2025-06-02 17:35:08 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:08.499005 | orchestrator | 2025-06-02 17:35:08 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:08.502471 | orchestrator | 2025-06-02 17:35:08 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:08.502541 | orchestrator | 2025-06-02 17:35:08 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:08.502551 | orchestrator | 2025-06-02 17:35:08 | INFO  | 
Wait 1 second(s) until the next check 2025-06-02 17:35:11.540727 | orchestrator | 2025-06-02 17:35:11 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:11.541965 | orchestrator | 2025-06-02 17:35:11 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:11.543262 | orchestrator | 2025-06-02 17:35:11 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:11.544728 | orchestrator | 2025-06-02 17:35:11 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:11.544740 | orchestrator | 2025-06-02 17:35:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:14.582146 | orchestrator | 2025-06-02 17:35:14 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:14.582536 | orchestrator | 2025-06-02 17:35:14 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:14.583681 | orchestrator | 2025-06-02 17:35:14 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:14.584397 | orchestrator | 2025-06-02 17:35:14 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:14.584489 | orchestrator | 2025-06-02 17:35:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:17.616798 | orchestrator | 2025-06-02 17:35:17 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:17.616911 | orchestrator | 2025-06-02 17:35:17 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:17.617303 | orchestrator | 2025-06-02 17:35:17 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:17.618416 | orchestrator | 2025-06-02 17:35:17 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:17.618745 | orchestrator | 2025-06-02 17:35:17 | INFO  | Wait 1 second(s) until the next 
check 2025-06-02 17:35:20.661534 | orchestrator | 2025-06-02 17:35:20 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:20.662873 | orchestrator | 2025-06-02 17:35:20 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:20.664893 | orchestrator | 2025-06-02 17:35:20 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:20.665942 | orchestrator | 2025-06-02 17:35:20 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:20.665958 | orchestrator | 2025-06-02 17:35:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:23.716447 | orchestrator | 2025-06-02 17:35:23 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:23.716846 | orchestrator | 2025-06-02 17:35:23 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:23.718413 | orchestrator | 2025-06-02 17:35:23 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:23.719971 | orchestrator | 2025-06-02 17:35:23 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:23.720003 | orchestrator | 2025-06-02 17:35:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:26.765312 | orchestrator | 2025-06-02 17:35:26 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:26.765995 | orchestrator | 2025-06-02 17:35:26 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:26.767490 | orchestrator | 2025-06-02 17:35:26 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:26.768992 | orchestrator | 2025-06-02 17:35:26 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:26.769029 | orchestrator | 2025-06-02 17:35:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 
17:35:29.825739 | orchestrator | 2025-06-02 17:35:29 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:29.827086 | orchestrator | 2025-06-02 17:35:29 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:29.829999 | orchestrator | 2025-06-02 17:35:29 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:29.831878 | orchestrator | 2025-06-02 17:35:29 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:29.831979 | orchestrator | 2025-06-02 17:35:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:32.886290 | orchestrator | 2025-06-02 17:35:32 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:32.891075 | orchestrator | 2025-06-02 17:35:32 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:32.894132 | orchestrator | 2025-06-02 17:35:32 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:32.897450 | orchestrator | 2025-06-02 17:35:32 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:32.897490 | orchestrator | 2025-06-02 17:35:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:35.961406 | orchestrator | 2025-06-02 17:35:35 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:35.964458 | orchestrator | 2025-06-02 17:35:35 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:35.966559 | orchestrator | 2025-06-02 17:35:35 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:35.968797 | orchestrator | 2025-06-02 17:35:35 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:35.974436 | orchestrator | 2025-06-02 17:35:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:39.044855 | orchestrator 
| 2025-06-02 17:35:39 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:39.046673 | orchestrator | 2025-06-02 17:35:39 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:39.048546 | orchestrator | 2025-06-02 17:35:39 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:39.050612 | orchestrator | 2025-06-02 17:35:39 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:39.050728 | orchestrator | 2025-06-02 17:35:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:42.106536 | orchestrator | 2025-06-02 17:35:42 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:42.107161 | orchestrator | 2025-06-02 17:35:42 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:42.108703 | orchestrator | 2025-06-02 17:35:42 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state STARTED 2025-06-02 17:35:42.110468 | orchestrator | 2025-06-02 17:35:42 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:42.110499 | orchestrator | 2025-06-02 17:35:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:45.172931 | orchestrator | 2025-06-02 17:35:45 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:45.175685 | orchestrator | 2025-06-02 17:35:45 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:45.176444 | orchestrator | 2025-06-02 17:35:45 | INFO  | Task aa813d09-9b30-40cc-bb84-c40fb3ee563e is in state SUCCESS 2025-06-02 17:35:45.177995 | orchestrator | 2025-06-02 17:35:45 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:45.178290 | orchestrator | 2025-06-02 17:35:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:48.216663 | orchestrator | 2025-06-02 17:35:48 | INFO  | 
Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:48.219151 | orchestrator | 2025-06-02 17:35:48 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:48.220622 | orchestrator | 2025-06-02 17:35:48 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:48.220671 | orchestrator | 2025-06-02 17:35:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:51.274125 | orchestrator | 2025-06-02 17:35:51 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:51.275601 | orchestrator | 2025-06-02 17:35:51 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:51.276892 | orchestrator | 2025-06-02 17:35:51 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:51.277069 | orchestrator | 2025-06-02 17:35:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:54.319939 | orchestrator | 2025-06-02 17:35:54 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:54.322357 | orchestrator | 2025-06-02 17:35:54 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:54.324007 | orchestrator | 2025-06-02 17:35:54 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:54.324829 | orchestrator | 2025-06-02 17:35:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:35:57.364562 | orchestrator | 2025-06-02 17:35:57 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:35:57.365942 | orchestrator | 2025-06-02 17:35:57 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:35:57.368927 | orchestrator | 2025-06-02 17:35:57 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:35:57.368993 | orchestrator | 2025-06-02 17:35:57 | INFO  | Wait 1 second(s) until the next 
check 2025-06-02 17:36:00.409976 | orchestrator | 2025-06-02 17:36:00 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:36:00.410761 | orchestrator | 2025-06-02 17:36:00 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:36:00.412037 | orchestrator | 2025-06-02 17:36:00 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:36:00.412078 | orchestrator | 2025-06-02 17:36:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:36:03.457002 | orchestrator | 2025-06-02 17:36:03 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:36:03.458661 | orchestrator | 2025-06-02 17:36:03 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:36:03.460349 | orchestrator | 2025-06-02 17:36:03 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:36:03.460434 | orchestrator | 2025-06-02 17:36:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:36:06.500336 | orchestrator | 2025-06-02 17:36:06 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:36:06.500482 | orchestrator | 2025-06-02 17:36:06 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:36:06.500503 | orchestrator | 2025-06-02 17:36:06 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:36:06.500517 | orchestrator | 2025-06-02 17:36:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:36:09.568454 | orchestrator | 2025-06-02 17:36:09 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:36:09.570381 | orchestrator | 2025-06-02 17:36:09 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:36:09.571158 | orchestrator | 2025-06-02 17:36:09 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 
17:36:09.571259 | orchestrator | 2025-06-02 17:36:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:36:12.623474 | orchestrator | 2025-06-02 17:36:12 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:36:12.625763 | orchestrator | 2025-06-02 17:36:12 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:36:12.628943 | orchestrator | 2025-06-02 17:36:12 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:36:12.629649 | orchestrator | 2025-06-02 17:36:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:36:15.681295 | orchestrator | 2025-06-02 17:36:15 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:36:15.681400 | orchestrator | 2025-06-02 17:36:15 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:36:15.681414 | orchestrator | 2025-06-02 17:36:15 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state STARTED 2025-06-02 17:36:15.681425 | orchestrator | 2025-06-02 17:36:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:36:18.724170 | orchestrator | 2025-06-02 17:36:18 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED 2025-06-02 17:36:18.724308 | orchestrator | 2025-06-02 17:36:18 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:36:18.724323 | orchestrator | 2025-06-02 17:36:18 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:36:18.724351 | orchestrator | 2025-06-02 17:36:18 | INFO  | Task aecb6b9e-c762-4396-a73f-81a49cbcfd88 is in state STARTED 2025-06-02 17:36:18.724361 | orchestrator | 2025-06-02 17:36:18 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:36:18.727191 | orchestrator | 2025-06-02 17:36:18 | INFO  | Task 3cb75ead-09d5-4667-b50e-4731182bd71b is in state SUCCESS 2025-06-02 17:36:18.735577 | orchestrator 
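The long run of "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above is a plain poll-until-done loop: each round queries the state of every still-pending task, logs it, and sleeps before the next check; tasks that reach a terminal state drop out of later rounds (as the `aa813d09…` task does after flipping to SUCCESS at 17:35:45). A minimal sketch of that pattern, assuming a `get_state` lookup and task IDs that are placeholders rather than the actual OSISM client API:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll task states until none is still STARTED, logging each round.

    get_state(task_id) is assumed to return a state string such as
    "STARTED" or "SUCCESS" (hypothetical stand-in for the real task API).
    """
    pending = set(task_ids)
    while pending:
        # query each pending task once per round and log its state
        states = {t: get_state(t) for t in sorted(pending)}
        for task_id, state in states.items():
            print(f"Task {task_id} is in state {state}")
        # tasks that reached a terminal state drop out of the next round
        pending = {t for t, s in states.items() if s == "STARTED"}
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
```

In the log, four task UUIDs are polled roughly every three seconds; once one reports SUCCESS it no longer appears in subsequent rounds, matching the shrinking `pending` set above.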
| 2025-06-02 17:36:18.735667 | orchestrator | 2025-06-02 17:36:18.735677 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-06-02 17:36:18.735687 | orchestrator | 2025-06-02 17:36:18.735695 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-06-02 17:36:18.735704 | orchestrator | Monday 02 June 2025 17:34:04 +0000 (0:00:00.388) 0:00:00.388 *********** 2025-06-02 17:36:18.735713 | orchestrator | ok: [testbed-manager] 2025-06-02 17:36:18.735722 | orchestrator | 2025-06-02 17:36:18.735730 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-06-02 17:36:18.735739 | orchestrator | Monday 02 June 2025 17:34:07 +0000 (0:00:02.304) 0:00:02.692 *********** 2025-06-02 17:36:18.735747 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-06-02 17:36:18.735755 | orchestrator | 2025-06-02 17:36:18.735763 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-06-02 17:36:18.735771 | orchestrator | Monday 02 June 2025 17:34:08 +0000 (0:00:01.582) 0:00:04.274 *********** 2025-06-02 17:36:18.735779 | orchestrator | changed: [testbed-manager] 2025-06-02 17:36:18.735787 | orchestrator | 2025-06-02 17:36:18.735796 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-06-02 17:36:18.735804 | orchestrator | Monday 02 June 2025 17:34:11 +0000 (0:00:03.160) 0:00:07.435 *********** 2025-06-02 17:36:18.735811 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-06-02 17:36:18.735820 | orchestrator | ok: [testbed-manager]
2025-06-02 17:36:18.735827 | orchestrator |
2025-06-02 17:36:18.735835 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-06-02 17:36:18.735843 | orchestrator | Monday 02 June 2025 17:35:20 +0000 (0:01:08.096) 0:01:15.532 ***********
2025-06-02 17:36:18.735876 | orchestrator | changed: [testbed-manager]
2025-06-02 17:36:18.735885 | orchestrator |
2025-06-02 17:36:18.735893 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:36:18.735901 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:36:18.735911 | orchestrator |
2025-06-02 17:36:18.735919 | orchestrator |
2025-06-02 17:36:18.735926 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:36:18.735934 | orchestrator | Monday 02 June 2025 17:35:43 +0000 (0:00:23.843) 0:01:39.375 ***********
2025-06-02 17:36:18.735958 | orchestrator | ===============================================================================
2025-06-02 17:36:18.735966 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 68.10s
2025-06-02 17:36:18.735974 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 23.84s
2025-06-02 17:36:18.735982 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 3.16s
2025-06-02 17:36:18.735989 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.30s
2025-06-02 17:36:18.735997 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.58s
2025-06-02 17:36:18.736005 | orchestrator |
2025-06-02 17:36:18.736012 | orchestrator |
2025-06-02 17:36:18.736020 | orchestrator | PLAY [Apply role common] *******************************************************
2025-06-02 17:36:18.736028 | orchestrator |
2025-06-02 17:36:18.736035 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-02 17:36:18.736043 | orchestrator | Monday 02 June 2025 17:33:26 +0000 (0:00:00.360) 0:00:00.360 ***********
2025-06-02 17:36:18.736051 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:36:18.736060 | orchestrator |
2025-06-02 17:36:18.736068 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-06-02 17:36:18.736076 | orchestrator | Monday 02 June 2025 17:33:28 +0000 (0:00:01.765) 0:00:02.125 ***********
2025-06-02 17:36:18.736084 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:36:18.736092 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:36:18.736100 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:36:18.736108 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:36:18.736115 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:36:18.736123 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:36:18.736132 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:36:18.736141 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:36:18.736153 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:36:18.736164 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:36:18.736173 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:36:18.736182 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:36:18.736191 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:36:18.736200 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2025-06-02 17:36:18.736209 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:36:18.736219 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:36:18.736242 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-06-02 17:36:18.736252 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:36:18.736262 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:36:18.736271 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:36:18.736280 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2025-06-02 17:36:18.736295 | orchestrator |
2025-06-02 17:36:18.736304 | orchestrator | TASK [common : include_tasks] **************************************************
2025-06-02 17:36:18.736313 | orchestrator | Monday 02 June 2025 17:33:33 +0000 (0:00:04.877) 0:00:07.003 ***********
2025-06-02 17:36:18.736322 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:36:18.736333 | orchestrator |
2025-06-02 17:36:18.736342 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2025-06-02 17:36:18.736351 | orchestrator | Monday 02 June 2025 17:33:34 +0000 (0:00:01.245) 0:00:08.249 ***********
2025-06-02 17:36:18.736364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736377 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736408 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736423 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736434 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736476 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736497 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736521 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736567 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736641 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736651 | orchestrator |
2025-06-02 17:36:18.736659 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-06-02 17:36:18.736673 | orchestrator | Monday 02 June 2025 17:33:39 +0000 (0:00:05.075) 0:00:13.325 ***********
2025-06-02 17:36:18.736692 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736701 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736710 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736718 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:36:18.736726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736792 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:36:18.736800 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:36:18.736808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736832 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736840 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736865 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:36:18.736873 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:36:18.736881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736894 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736910 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:36:18.736918 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736926 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736934 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736942 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:36:18.736949 | orchestrator |
2025-06-02 17:36:18.736956 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-06-02 17:36:18.736963 | orchestrator | Monday 02 June 2025 17:33:40 +0000 (0:00:01.189) 0:00:14.515 ***********
2025-06-02 17:36:18.736977 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.736984 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.736994 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.737001 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:36:18.737008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.737015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.737022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.737029 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:36:18.737039 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.737046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.737062 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.737069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.737080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.737088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:36:18.737095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-06-02 17:36:18.737102 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:36:18.737108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:36:18.737120 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:36:18.737126 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:36:18.737133 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:36:18.737139 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 17:36:18.737149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:36:18.737165 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:36:18.737172 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:36:18.737179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-02 17:36:18.737186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:36:18.737193 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:36:18.737200 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:36:18.737206 | orchestrator | 2025-06-02 17:36:18.737213 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-06-02 17:36:18.737220 | orchestrator | Monday 02 June 2025 17:33:44 +0000 (0:00:03.303) 0:00:17.818 *********** 2025-06-02 17:36:18.737231 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:36:18.737238 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:36:18.737244 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:36:18.737251 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:36:18.737257 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:36:18.737264 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:36:18.737270 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:36:18.737277 | orchestrator | 2025-06-02 17:36:18.737283 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-06-02 17:36:18.737290 | orchestrator | Monday 02 June 2025 17:33:46 +0000 (0:00:02.173) 0:00:19.991 *********** 2025-06-02 17:36:18.737296 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:36:18.737303 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:36:18.737309 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 17:36:18.737316 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:36:18.737322 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:36:18.737329 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:36:18.737335 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:36:18.737341 | orchestrator | 2025-06-02 17:36:18.737348 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-06-02 17:36:18.737354 | orchestrator | Monday 02 June 2025 17:33:48 +0000 (0:00:02.136) 0:00:22.128 *********** 2025-06-02 17:36:18.737361 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.737371 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.737385 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.737399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.737411 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.737418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737425 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737435 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.737442 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': 
'1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737460 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.737471 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737485 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737513 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737520 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737527 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737540 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.737547 | orchestrator | 2025-06-02 17:36:18.737554 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-06-02 17:36:18.737561 | orchestrator | Monday 02 June 2025 17:33:55 +0000 (0:00:07.414) 0:00:29.542 *********** 2025-06-02 17:36:18.737567 | orchestrator | [WARNING]: Skipped 2025-06-02 17:36:18.737574 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-06-02 17:36:18.737581 | orchestrator | to this access issue: 2025-06-02 17:36:18.737602 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-06-02 17:36:18.737609 | orchestrator | directory 2025-06-02 17:36:18.737616 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:36:18.737622 | orchestrator | 2025-06-02 17:36:18.737629 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-06-02 17:36:18.737636 | orchestrator | Monday 02 June 2025 17:33:58 +0000 (0:00:02.772) 0:00:32.315 *********** 2025-06-02 17:36:18.737642 | orchestrator | [WARNING]: Skipped 2025-06-02 17:36:18.737649 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-06-02 17:36:18.737655 | orchestrator | to this access issue: 2025-06-02 17:36:18.737662 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-06-02 17:36:18.737668 | orchestrator | directory 2025-06-02 17:36:18.737675 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:36:18.737681 | orchestrator | 2025-06-02 17:36:18.737688 | orchestrator | TASK [common : Find custom fluentd format 
config files] ************************ 2025-06-02 17:36:18.737695 | orchestrator | Monday 02 June 2025 17:34:00 +0000 (0:00:01.723) 0:00:34.039 *********** 2025-06-02 17:36:18.737701 | orchestrator | [WARNING]: Skipped 2025-06-02 17:36:18.737708 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-06-02 17:36:18.737714 | orchestrator | to this access issue: 2025-06-02 17:36:18.737721 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-06-02 17:36:18.737727 | orchestrator | directory 2025-06-02 17:36:18.737733 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:36:18.737740 | orchestrator | 2025-06-02 17:36:18.737746 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-06-02 17:36:18.737753 | orchestrator | Monday 02 June 2025 17:34:01 +0000 (0:00:01.553) 0:00:35.592 *********** 2025-06-02 17:36:18.737759 | orchestrator | [WARNING]: Skipped 2025-06-02 17:36:18.737766 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-06-02 17:36:18.737772 | orchestrator | to this access issue: 2025-06-02 17:36:18.737779 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-06-02 17:36:18.737785 | orchestrator | directory 2025-06-02 17:36:18.737792 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:36:18.737798 | orchestrator | 2025-06-02 17:36:18.737805 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-06-02 17:36:18.737811 | orchestrator | Monday 02 June 2025 17:34:03 +0000 (0:00:01.246) 0:00:36.838 *********** 2025-06-02 17:36:18.737822 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:36:18.737829 | orchestrator | changed: [testbed-manager] 2025-06-02 17:36:18.737835 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:36:18.737842 | orchestrator | 
changed: [testbed-node-1] 2025-06-02 17:36:18.737848 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:36:18.737859 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:36:18.737866 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:36:18.737880 | orchestrator | 2025-06-02 17:36:18.737887 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-06-02 17:36:18.737893 | orchestrator | Monday 02 June 2025 17:34:12 +0000 (0:00:09.674) 0:00:46.513 *********** 2025-06-02 17:36:18.737900 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:36:18.737907 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:36:18.737913 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:36:18.737924 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:36:18.737931 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:36:18.737937 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:36:18.737944 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-02 17:36:18.737950 | orchestrator | 2025-06-02 17:36:18.737957 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-06-02 17:36:18.737963 | orchestrator | Monday 02 June 2025 17:34:17 +0000 (0:00:04.801) 0:00:51.314 *********** 2025-06-02 17:36:18.737970 | orchestrator | changed: [testbed-manager] 2025-06-02 17:36:18.737976 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:36:18.737983 | orchestrator | changed: 
[testbed-node-1] 2025-06-02 17:36:18.737989 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:36:18.737995 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:36:18.738002 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:36:18.738008 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:36:18.738048 | orchestrator | 2025-06-02 17:36:18.738057 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-06-02 17:36:18.738064 | orchestrator | Monday 02 June 2025 17:34:21 +0000 (0:00:03.427) 0:00:54.742 *********** 2025-06-02 17:36:18.738071 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738079 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738086 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:36:18.738093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:36:18.738111 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738126 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738134 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738141 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:36:18.738155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:36:18.738161 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:36:18.738193 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738204 | orchestrator | ok: [testbed-node-2] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738211 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738218 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:36:18.738232 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:36:18.738254 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738265 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738272 | orchestrator | 2025-06-02 17:36:18.738279 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-06-02 17:36:18.738285 | orchestrator | Monday 02 June 2025 17:34:24 +0000 (0:00:03.580) 0:00:58.322 *********** 2025-06-02 17:36:18.738292 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:36:18.738299 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:36:18.738305 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:36:18.738316 | orchestrator | 2025-06-02 17:36:18 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED 2025-06-02 17:36:18.738323 | orchestrator | 2025-06-02 17:36:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:36:18.738329 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:36:18.738336 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:36:18.738342 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:36:18.738349 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-02 17:36:18.738355 | orchestrator | 2025-06-02 17:36:18.738362 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-06-02 17:36:18.738368 | orchestrator | Monday 02 June 2025 17:34:28 +0000 (0:00:03.992) 0:01:02.314 *********** 2025-06-02 17:36:18.738375 | orchestrator | changed: 
[testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:36:18.738382 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:36:18.738388 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:36:18.738395 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:36:18.738401 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:36:18.738408 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:36:18.738414 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-02 17:36:18.738421 | orchestrator | 2025-06-02 17:36:18.738427 | orchestrator | TASK [common : Check common containers] **************************************** 2025-06-02 17:36:18.738439 | orchestrator | Monday 02 June 2025 17:34:34 +0000 (0:00:05.881) 0:01:08.196 *********** 2025-06-02 17:36:18.738445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738459 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-06-02 17:36:18.738488 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738495 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 
'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738520 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738530 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-02 17:36:18.738542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738563 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738582 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738604 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738614 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738621 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738634 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:36:18.738641 | orchestrator | 2025-06-02 17:36:18.738647 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-02 17:36:18.738654 | orchestrator | Monday 02 June 2025 17:34:39 +0000 (0:00:04.963) 0:01:13.159 *********** 2025-06-02 17:36:18.738661 | orchestrator | changed: [testbed-manager] 2025-06-02 17:36:18.738668 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:36:18.738674 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:36:18.738681 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:36:18.738687 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:36:18.738694 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:36:18.738701 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:36:18.738712 | orchestrator | 2025-06-02 17:36:18.738718 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-02 17:36:18.738725 | orchestrator | Monday 02 June 2025 17:34:41 +0000 (0:00:02.409) 0:01:15.569 *********** 2025-06-02 17:36:18.738731 | orchestrator | changed: [testbed-manager] 2025-06-02 17:36:18.738738 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:36:18.738744 | orchestrator | changed: 
[testbed-node-1] 2025-06-02 17:36:18.738751 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:36:18.738757 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:36:18.738764 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:36:18.738770 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:36:18.738777 | orchestrator | 2025-06-02 17:36:18.738783 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:36:18.738790 | orchestrator | Monday 02 June 2025 17:34:43 +0000 (0:00:01.639) 0:01:17.208 *********** 2025-06-02 17:36:18.738796 | orchestrator | 2025-06-02 17:36:18.738803 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:36:18.738809 | orchestrator | Monday 02 June 2025 17:34:43 +0000 (0:00:00.353) 0:01:17.562 *********** 2025-06-02 17:36:18.738816 | orchestrator | 2025-06-02 17:36:18.738823 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:36:18.738829 | orchestrator | Monday 02 June 2025 17:34:43 +0000 (0:00:00.119) 0:01:17.681 *********** 2025-06-02 17:36:18.738835 | orchestrator | 2025-06-02 17:36:18.738842 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:36:18.738848 | orchestrator | Monday 02 June 2025 17:34:44 +0000 (0:00:00.104) 0:01:17.786 *********** 2025-06-02 17:36:18.738855 | orchestrator | 2025-06-02 17:36:18.738862 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:36:18.738868 | orchestrator | Monday 02 June 2025 17:34:44 +0000 (0:00:00.108) 0:01:17.894 *********** 2025-06-02 17:36:18.738874 | orchestrator | 2025-06-02 17:36:18.738881 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:36:18.738887 | orchestrator | Monday 02 June 2025 17:34:44 +0000 (0:00:00.095) 
0:01:17.989 *********** 2025-06-02 17:36:18.738894 | orchestrator | 2025-06-02 17:36:18.738900 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-02 17:36:18.738907 | orchestrator | Monday 02 June 2025 17:34:44 +0000 (0:00:00.070) 0:01:18.059 *********** 2025-06-02 17:36:18.738913 | orchestrator | 2025-06-02 17:36:18.738920 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-02 17:36:18.738926 | orchestrator | Monday 02 June 2025 17:34:44 +0000 (0:00:00.103) 0:01:18.163 *********** 2025-06-02 17:36:18.738933 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:36:18.738939 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:36:18.738946 | orchestrator | changed: [testbed-manager] 2025-06-02 17:36:18.738952 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:36:18.738959 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:36:18.738965 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:36:18.738972 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:36:18.738978 | orchestrator | 2025-06-02 17:36:18.738985 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-06-02 17:36:18.738991 | orchestrator | Monday 02 June 2025 17:35:24 +0000 (0:00:40.169) 0:01:58.332 *********** 2025-06-02 17:36:18.738998 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:36:18.739004 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:36:18.739011 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:36:18.739017 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:36:18.739024 | orchestrator | changed: [testbed-manager] 2025-06-02 17:36:18.739030 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:36:18.739036 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:36:18.739043 | orchestrator | 2025-06-02 17:36:18.739049 | orchestrator | RUNNING HANDLER [common : Initializing toolbox 
container using normal user] **** 2025-06-02 17:36:18.739056 | orchestrator | Monday 02 June 2025 17:36:04 +0000 (0:00:39.420) 0:02:37.753 *********** 2025-06-02 17:36:18.739067 | orchestrator | ok: [testbed-manager] 2025-06-02 17:36:18.739073 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:36:18.739080 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:36:18.739086 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:36:18.739093 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:36:18.739099 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:36:18.739109 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:36:18.739115 | orchestrator | 2025-06-02 17:36:18.739122 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-06-02 17:36:18.739129 | orchestrator | Monday 02 June 2025 17:36:06 +0000 (0:00:02.322) 0:02:40.075 *********** 2025-06-02 17:36:18.739135 | orchestrator | changed: [testbed-manager] 2025-06-02 17:36:18.739141 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:36:18.739148 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:36:18.739154 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:36:18.739161 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:36:18.739167 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:36:18.739174 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:36:18.739180 | orchestrator | 2025-06-02 17:36:18.739187 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:36:18.739194 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:36:18.739204 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:36:18.739211 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:36:18.739218 | orchestrator | 
testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:36:18.739225 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:36:18.739231 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:36:18.739238 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-06-02 17:36:18.739244 | orchestrator | 2025-06-02 17:36:18.739251 | orchestrator | 2025-06-02 17:36:18.739258 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:36:18.739264 | orchestrator | Monday 02 June 2025 17:36:15 +0000 (0:00:09.329) 0:02:49.405 *********** 2025-06-02 17:36:18.739271 | orchestrator | =============================================================================== 2025-06-02 17:36:18.739277 | orchestrator | common : Restart fluentd container ------------------------------------- 40.17s 2025-06-02 17:36:18.739284 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 39.42s 2025-06-02 17:36:18.739290 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 9.67s 2025-06-02 17:36:18.739297 | orchestrator | common : Restart cron container ----------------------------------------- 9.33s 2025-06-02 17:36:18.739303 | orchestrator | common : Copying over config.json files for services -------------------- 7.41s 2025-06-02 17:36:18.739310 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 5.88s 2025-06-02 17:36:18.739316 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.08s 2025-06-02 17:36:18.739323 | orchestrator | common : Check common containers ---------------------------------------- 4.96s 2025-06-02 17:36:18.739330 | orchestrator | common : Ensuring 
config directories exist ------------------------------ 4.88s 2025-06-02 17:36:18.739336 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.80s 2025-06-02 17:36:18.739349 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.99s 2025-06-02 17:36:18.739355 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.58s 2025-06-02 17:36:18.739362 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.43s 2025-06-02 17:36:18.739368 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.30s 2025-06-02 17:36:18.739375 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.77s 2025-06-02 17:36:18.739381 | orchestrator | common : Creating log volume -------------------------------------------- 2.41s 2025-06-02 17:36:18.739387 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.32s 2025-06-02 17:36:18.739394 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 2.17s 2025-06-02 17:36:18.739400 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 2.14s 2025-06-02 17:36:18.739407 | orchestrator | common : include_tasks -------------------------------------------------- 1.77s 2025-06-02 17:36:21.774512 | orchestrator | 2025-06-02 17:36:21 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED 2025-06-02 17:36:21.775970 | orchestrator | 2025-06-02 17:36:21 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:36:21.781579 | orchestrator | 2025-06-02 17:36:21 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:36:21.782292 | orchestrator | 2025-06-02 17:36:21 | INFO  | Task aecb6b9e-c762-4396-a73f-81a49cbcfd88 is in state STARTED 2025-06-02 17:36:21.786493 | 
orchestrator | 2025-06-02 17:36:21 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:36:21.792641 | orchestrator | 2025-06-02 17:36:21 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:36:21.792718 | orchestrator | 2025-06-02 17:36:21 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:24.853563 | orchestrator | 2025-06-02 17:36:24 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED
2025-06-02 17:36:24.853741 | orchestrator | 2025-06-02 17:36:24 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:36:24.853758 | orchestrator | 2025-06-02 17:36:24 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:36:24.853770 | orchestrator | 2025-06-02 17:36:24 | INFO  | Task aecb6b9e-c762-4396-a73f-81a49cbcfd88 is in state STARTED
2025-06-02 17:36:24.855170 | orchestrator | 2025-06-02 17:36:24 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:36:24.855212 | orchestrator | 2025-06-02 17:36:24 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:36:24.855224 | orchestrator | 2025-06-02 17:36:24 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:27.900641 | orchestrator | 2025-06-02 17:36:27 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED
2025-06-02 17:36:27.905262 | orchestrator | 2025-06-02 17:36:27 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:36:27.905345 | orchestrator | 2025-06-02 17:36:27 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:36:27.905360 | orchestrator | 2025-06-02 17:36:27 | INFO  | Task aecb6b9e-c762-4396-a73f-81a49cbcfd88 is in state STARTED
2025-06-02 17:36:27.906363 | orchestrator | 2025-06-02 17:36:27 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:36:27.910360 | orchestrator | 2025-06-02 17:36:27 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:36:27.910459 | orchestrator | 2025-06-02 17:36:27 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:30.951285 | orchestrator | 2025-06-02 17:36:30 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED
2025-06-02 17:36:30.951391 | orchestrator | 2025-06-02 17:36:30 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:36:30.951407 | orchestrator | 2025-06-02 17:36:30 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:36:30.951418 | orchestrator | 2025-06-02 17:36:30 | INFO  | Task aecb6b9e-c762-4396-a73f-81a49cbcfd88 is in state STARTED
2025-06-02 17:36:30.951701 | orchestrator | 2025-06-02 17:36:30 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:36:30.957034 | orchestrator | 2025-06-02 17:36:30 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:36:30.957118 | orchestrator | 2025-06-02 17:36:30 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:33.988890 | orchestrator | 2025-06-02 17:36:33 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED
2025-06-02 17:36:33.990006 | orchestrator | 2025-06-02 17:36:33 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:36:33.990575 | orchestrator | 2025-06-02 17:36:33 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:36:33.991247 | orchestrator | 2025-06-02 17:36:33 | INFO  | Task aecb6b9e-c762-4396-a73f-81a49cbcfd88 is in state STARTED
2025-06-02 17:36:33.993993 | orchestrator | 2025-06-02 17:36:33 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:36:33.994103 | orchestrator | 2025-06-02 17:36:33 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:36:33.994129 | orchestrator | 2025-06-02 17:36:33 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:37.070651 | orchestrator | 2025-06-02 17:36:37 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED
2025-06-02 17:36:37.072881 | orchestrator | 2025-06-02 17:36:37 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:36:37.074322 | orchestrator | 2025-06-02 17:36:37 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:36:37.078269 | orchestrator | 2025-06-02 17:36:37 | INFO  | Task aecb6b9e-c762-4396-a73f-81a49cbcfd88 is in state STARTED
2025-06-02 17:36:37.078309 | orchestrator | 2025-06-02 17:36:37 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:36:37.080828 | orchestrator | 2025-06-02 17:36:37 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:36:37.080903 | orchestrator | 2025-06-02 17:36:37 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:40.115981 | orchestrator | 2025-06-02 17:36:40 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED
2025-06-02 17:36:40.117775 | orchestrator | 2025-06-02 17:36:40 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:36:40.118472 | orchestrator | 2025-06-02 17:36:40 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:36:40.119296 | orchestrator | 2025-06-02 17:36:40 | INFO  | Task aecb6b9e-c762-4396-a73f-81a49cbcfd88 is in state SUCCESS
2025-06-02 17:36:40.120248 | orchestrator | 2025-06-02 17:36:40 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:36:40.121848 | orchestrator | 2025-06-02 17:36:40 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:36:40.123121 | orchestrator | 2025-06-02 17:36:40 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:36:40.123164 | orchestrator | 2025-06-02 17:36:40 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:43.181518 | orchestrator | 2025-06-02 17:36:43 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED
2025-06-02 17:36:43.182500 | orchestrator | 2025-06-02 17:36:43 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:36:43.188809 | orchestrator | 2025-06-02 17:36:43 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:36:43.190192 | orchestrator | 2025-06-02 17:36:43 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:36:43.192181 | orchestrator | 2025-06-02 17:36:43 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:36:43.193398 | orchestrator | 2025-06-02 17:36:43 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:36:43.193441 | orchestrator | 2025-06-02 17:36:43 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:46.279405 | orchestrator | 2025-06-02 17:36:46 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED
2025-06-02 17:36:46.279514 | orchestrator | 2025-06-02 17:36:46 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:36:46.280285 | orchestrator | 2025-06-02 17:36:46 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:36:46.283982 | orchestrator | 2025-06-02 17:36:46 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:36:46.284756 | orchestrator | 2025-06-02 17:36:46 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:36:46.285763 | orchestrator | 2025-06-02 17:36:46 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:36:46.285795 | orchestrator | 2025-06-02 17:36:46 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:49.324722 | orchestrator | 2025-06-02 17:36:49 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED
2025-06-02 17:36:49.325559 | orchestrator | 2025-06-02 17:36:49 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:36:49.327507 | orchestrator | 2025-06-02 17:36:49 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:36:49.329698 | orchestrator | 2025-06-02 17:36:49 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:36:49.331618 | orchestrator | 2025-06-02 17:36:49 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:36:49.333863 | orchestrator | 2025-06-02 17:36:49 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:36:49.333899 | orchestrator | 2025-06-02 17:36:49 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:52.384938 | orchestrator | 2025-06-02 17:36:52 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED
2025-06-02 17:36:52.385190 | orchestrator | 2025-06-02 17:36:52 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:36:52.388999 | orchestrator | 2025-06-02 17:36:52 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:36:52.389668 | orchestrator | 2025-06-02 17:36:52 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:36:52.390338 | orchestrator | 2025-06-02 17:36:52 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:36:52.392193 | orchestrator | 2025-06-02 17:36:52 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:36:52.392280 | orchestrator | 2025-06-02 17:36:52 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:55.428838 | orchestrator | 2025-06-02 17:36:55 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED
2025-06-02 17:36:55.429717 | orchestrator | 2025-06-02 17:36:55 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:36:55.430630 | orchestrator | 2025-06-02 17:36:55 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:36:55.431558 | orchestrator | 2025-06-02 17:36:55 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:36:55.432759 | orchestrator | 2025-06-02 17:36:55 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:36:55.435027 | orchestrator | 2025-06-02 17:36:55 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:36:55.435085 | orchestrator | 2025-06-02 17:36:55 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:36:58.484364 | orchestrator | 2025-06-02 17:36:58 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state STARTED
2025-06-02 17:36:58.485848 | orchestrator | 2025-06-02 17:36:58 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:36:58.486479 | orchestrator | 2025-06-02 17:36:58 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:36:58.487419 | orchestrator | 2025-06-02 17:36:58 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:36:58.489145 | orchestrator | 2025-06-02 17:36:58 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:36:58.489892 | orchestrator | 2025-06-02 17:36:58 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:36:58.489940 | orchestrator | 2025-06-02 17:36:58 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:37:01.524225 | orchestrator | 2025-06-02 17:37:01 | INFO  | Task fd1ce144-2550-4cba-8be8-0333abe9151c is in state SUCCESS
2025-06-02 17:37:01.525722 | orchestrator |
2025-06-02 17:37:01.525757 | orchestrator |
2025-06-02 17:37:01.525765 | orchestrator | PLAY [Group hosts based on configuration]
**************************************
2025-06-02 17:37:01.525773 | orchestrator |
2025-06-02 17:37:01.525781 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:37:01.525793 | orchestrator | Monday 02 June 2025 17:36:24 +0000 (0:00:00.318) 0:00:00.318 ***********
2025-06-02 17:37:01.525804 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:37:01.525816 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:37:01.525825 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:37:01.525835 | orchestrator |
2025-06-02 17:37:01.525844 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:37:01.525854 | orchestrator | Monday 02 June 2025 17:36:25 +0000 (0:00:00.719) 0:00:01.038 ***********
2025-06-02 17:37:01.525865 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-06-02 17:37:01.525875 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-06-02 17:37:01.525886 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-06-02 17:37:01.525896 | orchestrator |
2025-06-02 17:37:01.525907 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-06-02 17:37:01.525917 | orchestrator |
2025-06-02 17:37:01.525927 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-06-02 17:37:01.525937 | orchestrator | Monday 02 June 2025 17:36:25 +0000 (0:00:00.836) 0:00:01.874 ***********
2025-06-02 17:37:01.525949 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:37:01.525988 | orchestrator |
2025-06-02 17:37:01.525999 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-06-02 17:37:01.526010 | orchestrator | Monday 02 June 2025 17:36:27 +0000 (0:00:01.470) 0:00:03.344 ***********
2025-06-02 17:37:01.526105 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-02 17:37:01.526118 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-02 17:37:01.526128 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-02 17:37:01.526137 | orchestrator |
2025-06-02 17:37:01.526149 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-06-02 17:37:01.526159 | orchestrator | Monday 02 June 2025 17:36:28 +0000 (0:00:01.400) 0:00:04.745 ***********
2025-06-02 17:37:01.526169 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-06-02 17:37:01.526179 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-06-02 17:37:01.526189 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-06-02 17:37:01.526200 | orchestrator |
2025-06-02 17:37:01.526210 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-06-02 17:37:01.526219 | orchestrator | Monday 02 June 2025 17:36:31 +0000 (0:00:03.003) 0:00:07.749 ***********
2025-06-02 17:37:01.526229 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:37:01.526240 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:37:01.526250 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:37:01.526260 | orchestrator |
2025-06-02 17:37:01.526285 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-06-02 17:37:01.526297 | orchestrator | Monday 02 June 2025 17:36:34 +0000 (0:00:03.174) 0:00:10.923 ***********
2025-06-02 17:37:01.526307 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:37:01.526317 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:37:01.526327 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:37:01.526338 | orchestrator |
2025-06-02 17:37:01.526349 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:37:01.526360 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:37:01.526373 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:37:01.526384 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:37:01.526393 | orchestrator |
2025-06-02 17:37:01.526404 | orchestrator |
2025-06-02 17:37:01.526415 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:37:01.526424 | orchestrator | Monday 02 June 2025 17:36:38 +0000 (0:00:03.390) 0:00:14.314 ***********
2025-06-02 17:37:01.526434 | orchestrator | ===============================================================================
2025-06-02 17:37:01.526445 | orchestrator | memcached : Restart memcached container --------------------------------- 3.39s
2025-06-02 17:37:01.526455 | orchestrator | memcached : Check memcached container ----------------------------------- 3.17s
2025-06-02 17:37:01.526465 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.00s
2025-06-02 17:37:01.526475 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.47s
2025-06-02 17:37:01.526485 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.40s
2025-06-02 17:37:01.526495 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s
2025-06-02 17:37:01.526505 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s
2025-06-02 17:37:01.526516 | orchestrator |
2025-06-02 17:37:01.526526 | orchestrator |
2025-06-02 17:37:01.526536 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:37:01.526546 | orchestrator |
2025-06-02 17:37:01.526557 |
orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:37:01.526568 | orchestrator | Monday 02 June 2025 17:36:25 +0000 (0:00:00.653) 0:00:00.654 ***********
2025-06-02 17:37:01.526668 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:37:01.526685 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:37:01.526696 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:37:01.526706 | orchestrator |
2025-06-02 17:37:01.526717 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:37:01.526749 | orchestrator | Monday 02 June 2025 17:36:25 +0000 (0:00:00.716) 0:00:01.370 ***********
2025-06-02 17:37:01.526760 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-06-02 17:37:01.526769 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-06-02 17:37:01.526775 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-06-02 17:37:01.526782 | orchestrator |
2025-06-02 17:37:01.526788 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-06-02 17:37:01.526794 | orchestrator |
2025-06-02 17:37:01.526800 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-06-02 17:37:01.526807 | orchestrator | Monday 02 June 2025 17:36:27 +0000 (0:00:01.219) 0:00:02.590 ***********
2025-06-02 17:37:01.526813 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:37:01.526820 | orchestrator |
2025-06-02 17:37:01.526826 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-06-02 17:37:01.526832 | orchestrator | Monday 02 June 2025 17:36:28 +0000 (0:00:01.570) 0:00:04.160 ***********
2025-06-02 17:37:01.526843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 17:37:01.526853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 17:37:01.526860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 17:37:01.526876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 17:37:01.526883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 17:37:01.526902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 17:37:01.526909 | orchestrator |
2025-06-02 17:37:01.526915 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-06-02 17:37:01.526922 | orchestrator | Monday 02 June 2025 17:36:30 +0000 (0:00:01.671) 0:00:05.831 ***********
2025-06-02 17:37:01.526928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 17:37:01.526935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 17:37:01.526946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 17:37:01.526953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 17:37:01.526959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 17:37:01.526975 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 17:37:01.526982 | orchestrator |
2025-06-02 17:37:01.526988 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-06-02 17:37:01.526994 | orchestrator | Monday 02 June 2025 17:36:33 +0000 (0:00:03.660) 0:00:09.491 ***********
2025-06-02 17:37:01.527001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 17:37:01.527007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 17:37:01.527014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 17:37:01.528550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 17:37:01.528655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 17:37:01.528670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 17:37:01.528681 | orchestrator |
2025-06-02 17:37:01.528709 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-06-02 17:37:01.528722 | orchestrator | Monday 02 June 2025 17:36:37 +0000 (0:00:03.753) 0:00:13.245 ***********
2025-06-02 17:37:01.528735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 17:37:01.528747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 17:37:01.528759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-06-02 17:37:01.528777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 17:37:01.528821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 17:37:01.528833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-06-02 17:37:01.528844 | orchestrator |
2025-06-02 17:37:01.528854 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 17:37:01.528864 | orchestrator | Monday 02 June 2025 17:36:39 +0000 (0:00:02.097) 0:00:15.343 ***********
2025-06-02 17:37:01.528874 | orchestrator |
2025-06-02 17:37:01.528884 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 17:37:01.528902 | orchestrator | Monday 02 June 2025 17:36:39 +0000 (0:00:00.117) 0:00:15.461 ***********
2025-06-02 17:37:01.528912 | orchestrator |
2025-06-02 17:37:01.528922 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-06-02 17:37:01.528932 | orchestrator | Monday 02 June 2025 17:36:39 +0000 (0:00:00.078) 0:00:15.540 ***********
2025-06-02 17:37:01.528942 | orchestrator |
2025-06-02 17:37:01.528951 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-06-02 17:37:01.528961 | orchestrator | Monday 02 June 2025 17:36:40 +0000 (0:00:00.077) 0:00:15.617 ***********
2025-06-02 17:37:01.528971 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:37:01.528981 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:37:01.528991 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:37:01.529001 | orchestrator |
2025-06-02 17:37:01.529012 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-06-02 17:37:01.529022 | orchestrator | Monday 02 June 2025 17:36:50 +0000 (0:00:10.050) 0:00:25.668 ***********
2025-06-02 17:37:01.529032 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:37:01.529041 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:37:01.529052 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:37:01.529062 | orchestrator |
2025-06-02 17:37:01.529077 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:37:01.529088 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:37:01.529100 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:37:01.529111 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:37:01.529121 | orchestrator |
2025-06-02 17:37:01.529131 | orchestrator |
2025-06-02 17:37:01.529142 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:37:01.529152 | orchestrator | Monday 02 June 2025 17:36:59 +0000 (0:00:09.516) 0:00:35.184 ***********
2025-06-02 17:37:01.529182 | orchestrator | ===============================================================================
2025-06-02 17:37:01.529189 | orchestrator | redis : Restart redis container ---------------------------------------- 10.05s
2025-06-02 17:37:01.529196 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.52s
2025-06-02 17:37:01.529202 | orchestrator | redis : Copying over redis config files --------------------------------- 3.75s
2025-06-02 17:37:01.529208 | orchestrator | redis : Copying over default config.json files -------------------------- 3.66s
2025-06-02 17:37:01.529214 | orchestrator | redis : Check redis containers ------------------------------------------ 2.10s
2025-06-02 17:37:01.529223 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.67s
2025-06-02 17:37:01.529240 | orchestrator | redis : include_tasks --------------------------------------------------- 1.57s
2025-06-02 17:37:01.529250 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.22s
2025-06-02 17:37:01.529260 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.72s
2025-06-02 17:37:01.529270 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.27s
2025-06-02 17:37:01.529441 | orchestrator | 2025-06-02 17:37:01 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:37:01.529452 | orchestrator | 2025-06-02 17:37:01 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:37:01.529459 | orchestrator | 2025-06-02 17:37:01 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:37:01.529465 | orchestrator | 2025-06-02 17:37:01 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:37:01.529869 | orchestrator | 2025-06-02 17:37:01 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED
2025-06-02 17:37:01.530205 | orchestrator | 2025-06-02 17:37:01 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:37:04.574728 | orchestrator | 2025-06-02 17:37:04 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:37:04.574926 | orchestrator | 2025-06-02 17:37:04 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:37:04.576072 | orchestrator | 2025-06-02 17:37:04 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:37:04.577966 | orchestrator | 2025-06-02 17:37:04 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:37:04.580039 | orchestrator | 2025-06-02
17:37:04 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED 2025-06-02 17:37:04.580084 | orchestrator | 2025-06-02 17:37:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:07.646952 | orchestrator | 2025-06-02 17:37:07 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:07.648711 | orchestrator | 2025-06-02 17:37:07 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:07.652089 | orchestrator | 2025-06-02 17:37:07 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:07.652847 | orchestrator | 2025-06-02 17:37:07 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:07.653533 | orchestrator | 2025-06-02 17:37:07 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED 2025-06-02 17:37:07.653703 | orchestrator | 2025-06-02 17:37:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:10.711716 | orchestrator | 2025-06-02 17:37:10 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:10.712603 | orchestrator | 2025-06-02 17:37:10 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:10.715295 | orchestrator | 2025-06-02 17:37:10 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:10.720867 | orchestrator | 2025-06-02 17:37:10 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:10.721790 | orchestrator | 2025-06-02 17:37:10 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED 2025-06-02 17:37:10.722070 | orchestrator | 2025-06-02 17:37:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:13.774293 | orchestrator | 2025-06-02 17:37:13 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:13.775557 | orchestrator | 2025-06-02 17:37:13 | INFO  | Task 
dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:13.776474 | orchestrator | 2025-06-02 17:37:13 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:13.777620 | orchestrator | 2025-06-02 17:37:13 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:13.779283 | orchestrator | 2025-06-02 17:37:13 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED 2025-06-02 17:37:13.779330 | orchestrator | 2025-06-02 17:37:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:16.817679 | orchestrator | 2025-06-02 17:37:16 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:16.818096 | orchestrator | 2025-06-02 17:37:16 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:16.819200 | orchestrator | 2025-06-02 17:37:16 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:16.829232 | orchestrator | 2025-06-02 17:37:16 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:16.829561 | orchestrator | 2025-06-02 17:37:16 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED 2025-06-02 17:37:16.829602 | orchestrator | 2025-06-02 17:37:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:19.877450 | orchestrator | 2025-06-02 17:37:19 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:19.879363 | orchestrator | 2025-06-02 17:37:19 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:19.880940 | orchestrator | 2025-06-02 17:37:19 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:19.882725 | orchestrator | 2025-06-02 17:37:19 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:19.884437 | orchestrator | 2025-06-02 17:37:19 | INFO  | Task 
194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED 2025-06-02 17:37:19.884489 | orchestrator | 2025-06-02 17:37:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:22.925719 | orchestrator | 2025-06-02 17:37:22 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:22.927275 | orchestrator | 2025-06-02 17:37:22 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:22.929921 | orchestrator | 2025-06-02 17:37:22 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:22.933535 | orchestrator | 2025-06-02 17:37:22 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:22.936050 | orchestrator | 2025-06-02 17:37:22 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED 2025-06-02 17:37:22.936832 | orchestrator | 2025-06-02 17:37:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:25.981173 | orchestrator | 2025-06-02 17:37:25 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:25.985113 | orchestrator | 2025-06-02 17:37:25 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:25.985173 | orchestrator | 2025-06-02 17:37:25 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:25.986204 | orchestrator | 2025-06-02 17:37:25 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:25.987633 | orchestrator | 2025-06-02 17:37:25 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED 2025-06-02 17:37:25.987667 | orchestrator | 2025-06-02 17:37:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:29.039988 | orchestrator | 2025-06-02 17:37:29 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:29.040190 | orchestrator | 2025-06-02 17:37:29 | INFO  | Task 
dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:29.043950 | orchestrator | 2025-06-02 17:37:29 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:29.048259 | orchestrator | 2025-06-02 17:37:29 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:29.049222 | orchestrator | 2025-06-02 17:37:29 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED 2025-06-02 17:37:29.049263 | orchestrator | 2025-06-02 17:37:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:32.085250 | orchestrator | 2025-06-02 17:37:32 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:32.088375 | orchestrator | 2025-06-02 17:37:32 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:32.091556 | orchestrator | 2025-06-02 17:37:32 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:32.092754 | orchestrator | 2025-06-02 17:37:32 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:32.094854 | orchestrator | 2025-06-02 17:37:32 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED 2025-06-02 17:37:32.094879 | orchestrator | 2025-06-02 17:37:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:35.150226 | orchestrator | 2025-06-02 17:37:35 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:35.151021 | orchestrator | 2025-06-02 17:37:35 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:35.151600 | orchestrator | 2025-06-02 17:37:35 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:35.152713 | orchestrator | 2025-06-02 17:37:35 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:35.153864 | orchestrator | 2025-06-02 17:37:35 | INFO  | Task 
194c659c-b44a-4336-bb69-f498b62d0ec9 is in state STARTED 2025-06-02 17:37:35.153900 | orchestrator | 2025-06-02 17:37:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:38.193008 | orchestrator | 2025-06-02 17:37:38 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:38.193198 | orchestrator | 2025-06-02 17:37:38 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:38.193721 | orchestrator | 2025-06-02 17:37:38 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:38.195143 | orchestrator | 2025-06-02 17:37:38 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:38.197349 | orchestrator | 2025-06-02 17:37:38 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:37:38.198721 | orchestrator | 2025-06-02 17:37:38 | INFO  | Task 194c659c-b44a-4336-bb69-f498b62d0ec9 is in state SUCCESS 2025-06-02 17:37:38.204211 | orchestrator | 2025-06-02 17:37:38.204271 | orchestrator | 2025-06-02 17:37:38.204283 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:37:38.204296 | orchestrator | 2025-06-02 17:37:38.204307 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:37:38.204317 | orchestrator | Monday 02 June 2025 17:36:25 +0000 (0:00:00.640) 0:00:00.640 *********** 2025-06-02 17:37:38.204327 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:37:38.204338 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:37:38.204348 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:37:38.204358 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:37:38.204367 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:37:38.204377 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:37:38.204386 | orchestrator | 2025-06-02 17:37:38.204396 | orchestrator | TASK [Group hosts based on enabled 
services] *********************************** 2025-06-02 17:37:38.204406 | orchestrator | Monday 02 June 2025 17:36:26 +0000 (0:00:01.246) 0:00:01.886 *********** 2025-06-02 17:37:38.204416 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 17:37:38.204426 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 17:37:38.204436 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 17:37:38.204445 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 17:37:38.204455 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 17:37:38.204466 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-02 17:37:38.204476 | orchestrator | 2025-06-02 17:37:38.204486 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-06-02 17:37:38.204495 | orchestrator | 2025-06-02 17:37:38.204505 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-06-02 17:37:38.204515 | orchestrator | Monday 02 June 2025 17:36:28 +0000 (0:00:01.448) 0:00:03.335 *********** 2025-06-02 17:37:38.204526 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:37:38.204538 | orchestrator | 2025-06-02 17:37:38.204547 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-02 17:37:38.204557 | orchestrator | Monday 02 June 2025 17:36:30 +0000 (0:00:02.408) 0:00:05.743 *********** 2025-06-02 17:37:38.204595 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-02 17:37:38.204611 | orchestrator | changed: [testbed-node-1] => 
(item=openvswitch) 2025-06-02 17:37:38.204626 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-02 17:37:38.204642 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-02 17:37:38.204660 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-02 17:37:38.204675 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-02 17:37:38.204691 | orchestrator | 2025-06-02 17:37:38.204702 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-02 17:37:38.204711 | orchestrator | Monday 02 June 2025 17:36:33 +0000 (0:00:02.501) 0:00:08.245 *********** 2025-06-02 17:37:38.204721 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-02 17:37:38.204730 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-02 17:37:38.204740 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-02 17:37:38.204749 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-02 17:37:38.204759 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-02 17:37:38.204768 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-02 17:37:38.204797 | orchestrator | 2025-06-02 17:37:38.204809 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-02 17:37:38.204820 | orchestrator | Monday 02 June 2025 17:36:35 +0000 (0:00:02.284) 0:00:10.529 *********** 2025-06-02 17:37:38.204831 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-06-02 17:37:38.204842 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:37:38.204853 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-06-02 17:37:38.204872 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:37:38.204883 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-06-02 17:37:38.204894 | orchestrator | skipping: 
[testbed-node-2] 2025-06-02 17:37:38.204905 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-06-02 17:37:38.204916 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:37:38.204926 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-06-02 17:37:38.204937 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:37:38.204947 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-06-02 17:37:38.204958 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:37:38.204968 | orchestrator | 2025-06-02 17:37:38.204979 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-06-02 17:37:38.204991 | orchestrator | Monday 02 June 2025 17:36:37 +0000 (0:00:02.176) 0:00:12.706 *********** 2025-06-02 17:37:38.205001 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:37:38.205012 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:37:38.205023 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:37:38.205033 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:37:38.205043 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:37:38.205054 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:37:38.205065 | orchestrator | 2025-06-02 17:37:38.205076 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-06-02 17:37:38.205086 | orchestrator | Monday 02 June 2025 17:36:39 +0000 (0:00:01.408) 0:00:14.114 *********** 2025-06-02 17:37:38.205118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205192 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205208 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205219 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205229 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205245 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205275 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205285 | orchestrator | 2025-06-02 17:37:38.205295 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-06-02 17:37:38.205305 | orchestrator | Monday 02 June 2025 17:36:42 +0000 (0:00:03.769) 0:00:17.884 *********** 2025-06-02 17:37:38.205315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205351 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205440 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205460 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205474 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205485 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205512 | orchestrator | 2025-06-02 17:37:38.205522 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-06-02 17:37:38.205532 | orchestrator | Monday 02 June 2025 17:36:47 +0000 (0:00:04.719) 0:00:22.604 *********** 2025-06-02 17:37:38.205542 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:37:38.205551 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
17:37:38.205561 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:37:38.205593 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:37:38.205602 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:37:38.205618 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:37:38.205628 | orchestrator | 2025-06-02 17:37:38.205638 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-06-02 17:37:38.205647 | orchestrator | Monday 02 June 2025 17:36:48 +0000 (0:00:01.322) 0:00:23.926 *********** 2025-06-02 17:37:38.205657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205692 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205760 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205786 | orchestrator | changed: [testbed-node-5] 
=> (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205806 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-02 17:37:38.205816 | orchestrator | 2025-06-02 17:37:38.205826 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 17:37:38.205836 | orchestrator | Monday 02 June 2025 17:36:52 +0000 (0:00:03.612) 0:00:27.539 *********** 2025-06-02 17:37:38.205845 | orchestrator | 2025-06-02 17:37:38.205855 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 17:37:38.205865 | orchestrator | Monday 02 June 2025 17:36:52 +0000 (0:00:00.193) 0:00:27.732 *********** 2025-06-02 17:37:38.205875 
| orchestrator | 2025-06-02 17:37:38.205884 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 17:37:38.205894 | orchestrator | Monday 02 June 2025 17:36:53 +0000 (0:00:00.334) 0:00:28.066 *********** 2025-06-02 17:37:38.205903 | orchestrator | 2025-06-02 17:37:38.205913 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 17:37:38.205922 | orchestrator | Monday 02 June 2025 17:36:53 +0000 (0:00:00.261) 0:00:28.328 *********** 2025-06-02 17:37:38.205932 | orchestrator | 2025-06-02 17:37:38.205941 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 17:37:38.205951 | orchestrator | Monday 02 June 2025 17:36:53 +0000 (0:00:00.166) 0:00:28.495 *********** 2025-06-02 17:37:38.205961 | orchestrator | 2025-06-02 17:37:38.205970 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-02 17:37:38.205980 | orchestrator | Monday 02 June 2025 17:36:53 +0000 (0:00:00.137) 0:00:28.633 *********** 2025-06-02 17:37:38.205989 | orchestrator | 2025-06-02 17:37:38.205999 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-02 17:37:38.206008 | orchestrator | Monday 02 June 2025 17:36:54 +0000 (0:00:00.435) 0:00:29.068 *********** 2025-06-02 17:37:38.206068 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:37:38.206079 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:37:38.206088 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:37:38.206098 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:37:38.206107 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:37:38.206117 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:37:38.206127 | orchestrator | 2025-06-02 17:37:38.206136 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 
2025-06-02 17:37:38.206146 | orchestrator | Monday 02 June 2025 17:37:04 +0000 (0:00:10.603) 0:00:39.671 *********** 2025-06-02 17:37:38.206155 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:37:38.206165 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:37:38.206174 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:37:38.206184 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:37:38.206193 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:37:38.206203 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:37:38.206212 | orchestrator | 2025-06-02 17:37:38.206226 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-02 17:37:38.206236 | orchestrator | Monday 02 June 2025 17:37:07 +0000 (0:00:02.609) 0:00:42.280 *********** 2025-06-02 17:37:38.206246 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:37:38.206255 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:37:38.206265 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:37:38.206281 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:37:38.206290 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:37:38.206299 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:37:38.206309 | orchestrator | 2025-06-02 17:37:38.206318 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-02 17:37:38.206328 | orchestrator | Monday 02 June 2025 17:37:12 +0000 (0:00:05.258) 0:00:47.539 *********** 2025-06-02 17:37:38.206337 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-02 17:37:38.206347 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-02 17:37:38.206357 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-02 17:37:38.206367 | orchestrator | 
changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-02 17:37:38.206376 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-02 17:37:38.206392 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-02 17:37:38.206402 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-02 17:37:38.206411 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-02 17:37:38.206421 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-02 17:37:38.206430 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-02 17:37:38.206440 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-02 17:37:38.206449 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-02 17:37:38.206458 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 17:37:38.206468 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 17:37:38.206477 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 17:37:38.206486 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 17:37:38.206496 | orchestrator | ok: 
[testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 17:37:38.206505 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-02 17:37:38.206515 | orchestrator | 2025-06-02 17:37:38.206525 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-02 17:37:38.206534 | orchestrator | Monday 02 June 2025 17:37:21 +0000 (0:00:08.575) 0:00:56.114 *********** 2025-06-02 17:37:38.206545 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-02 17:37:38.206554 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:37:38.206625 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-02 17:37:38.206638 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:37:38.206648 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-02 17:37:38.206658 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:37:38.206667 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-02 17:37:38.206677 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-02 17:37:38.206686 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-02 17:37:38.206708 | orchestrator | 2025-06-02 17:37:38.206718 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-02 17:37:38.206728 | orchestrator | Monday 02 June 2025 17:37:23 +0000 (0:00:02.562) 0:00:58.677 *********** 2025-06-02 17:37:38.206738 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-02 17:37:38.206747 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:37:38.206757 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-02 17:37:38.206766 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:37:38.206776 | orchestrator | skipping: [testbed-node-5] => 
(item=['br-ex', 'vxlan0'])  2025-06-02 17:37:38.206785 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:37:38.206795 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-02 17:37:38.206804 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-02 17:37:38.206814 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-02 17:37:38.206823 | orchestrator | 2025-06-02 17:37:38.206833 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-02 17:37:38.206847 | orchestrator | Monday 02 June 2025 17:37:27 +0000 (0:00:03.677) 0:01:02.354 *********** 2025-06-02 17:37:38.206857 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:37:38.206866 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:37:38.206876 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:37:38.206885 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:37:38.206894 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:37:38.206904 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:37:38.206913 | orchestrator | 2025-06-02 17:37:38.206923 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:37:38.206932 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 17:37:38.206944 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 17:37:38.206953 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 17:37:38.206963 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:37:38.206974 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:37:38.206999 | orchestrator | 
testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:37:38.207015 | orchestrator | 2025-06-02 17:37:38.207041 | orchestrator | 2025-06-02 17:37:38.207059 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:37:38.207073 | orchestrator | Monday 02 June 2025 17:37:35 +0000 (0:00:08.518) 0:01:10.873 *********** 2025-06-02 17:37:38.207088 | orchestrator | =============================================================================== 2025-06-02 17:37:38.207144 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 13.78s 2025-06-02 17:37:38.207155 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.60s 2025-06-02 17:37:38.207166 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.58s 2025-06-02 17:37:38.207177 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.72s 2025-06-02 17:37:38.207188 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 3.77s 2025-06-02 17:37:38.207200 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.68s 2025-06-02 17:37:38.207211 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.61s 2025-06-02 17:37:38.207234 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.61s 2025-06-02 17:37:38.207246 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.56s 2025-06-02 17:37:38.207258 | orchestrator | module-load : Load modules ---------------------------------------------- 2.50s 2025-06-02 17:37:38.207270 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.41s 2025-06-02 17:37:38.207281 | orchestrator | module-load : Persist modules via modules-load.d 
------------------------ 2.28s 2025-06-02 17:37:38.207294 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.18s 2025-06-02 17:37:38.207306 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.53s 2025-06-02 17:37:38.207318 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.45s 2025-06-02 17:37:38.207330 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.41s 2025-06-02 17:37:38.207342 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.32s 2025-06-02 17:37:38.207354 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.25s 2025-06-02 17:37:38.207368 | orchestrator | 2025-06-02 17:37:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:41.236284 | orchestrator | 2025-06-02 17:37:41 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:41.238266 | orchestrator | 2025-06-02 17:37:41 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:41.238305 | orchestrator | 2025-06-02 17:37:41 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:41.239455 | orchestrator | 2025-06-02 17:37:41 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:41.241068 | orchestrator | 2025-06-02 17:37:41 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:37:41.241116 | orchestrator | 2025-06-02 17:37:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:44.282680 | orchestrator | 2025-06-02 17:37:44 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:44.287985 | orchestrator | 2025-06-02 17:37:44 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:44.292510 | orchestrator | 
2025-06-02 17:37:44 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:44.292981 | orchestrator | 2025-06-02 17:37:44 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:44.293899 | orchestrator | 2025-06-02 17:37:44 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:37:44.293926 | orchestrator | 2025-06-02 17:37:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:47.340846 | orchestrator | 2025-06-02 17:37:47 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:47.341709 | orchestrator | 2025-06-02 17:37:47 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:47.343191 | orchestrator | 2025-06-02 17:37:47 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:47.345266 | orchestrator | 2025-06-02 17:37:47 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:47.346125 | orchestrator | 2025-06-02 17:37:47 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:37:47.346151 | orchestrator | 2025-06-02 17:37:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:50.394786 | orchestrator | 2025-06-02 17:37:50 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:50.396391 | orchestrator | 2025-06-02 17:37:50 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:50.397442 | orchestrator | 2025-06-02 17:37:50 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:50.400692 | orchestrator | 2025-06-02 17:37:50 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:50.401492 | orchestrator | 2025-06-02 17:37:50 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:37:50.401523 | orchestrator | 
2025-06-02 17:37:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:53.447143 | orchestrator | 2025-06-02 17:37:53 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:53.457201 | orchestrator | 2025-06-02 17:37:53 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:53.457288 | orchestrator | 2025-06-02 17:37:53 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:53.464792 | orchestrator | 2025-06-02 17:37:53 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:53.464916 | orchestrator | 2025-06-02 17:37:53 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:37:53.464935 | orchestrator | 2025-06-02 17:37:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:56.517706 | orchestrator | 2025-06-02 17:37:56 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:56.518105 | orchestrator | 2025-06-02 17:37:56 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:56.520241 | orchestrator | 2025-06-02 17:37:56 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:37:56.522168 | orchestrator | 2025-06-02 17:37:56 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:37:56.526308 | orchestrator | 2025-06-02 17:37:56 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:37:56.526370 | orchestrator | 2025-06-02 17:37:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:37:59.577681 | orchestrator | 2025-06-02 17:37:59 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED 2025-06-02 17:37:59.578852 | orchestrator | 2025-06-02 17:37:59 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:37:59.579245 | orchestrator | 2025-06-02 17:37:59 | INFO  | 
Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:37:59.580025 | orchestrator | 2025-06-02 17:37:59 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:37:59.581938 | orchestrator | 2025-06-02 17:37:59 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:37:59.582841 | orchestrator | 2025-06-02 17:37:59 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:38:02.627153 | orchestrator | 2025-06-02 17:38:02 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:38:02.627743 | orchestrator | 2025-06-02 17:38:02 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:38:02.629464 | orchestrator | 2025-06-02 17:38:02 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:38:02.631755 | orchestrator | 2025-06-02 17:38:02 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:38:02.632748 | orchestrator | 2025-06-02 17:38:02 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:38:02.632870 | orchestrator | 2025-06-02 17:38:02 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:38:05.665354 | orchestrator | 2025-06-02 17:38:05 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:38:05.665475 | orchestrator | 2025-06-02 17:38:05 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:38:05.666154 | orchestrator | 2025-06-02 17:38:05 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:38:05.666848 | orchestrator | 2025-06-02 17:38:05 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:38:05.667215 | orchestrator | 2025-06-02 17:38:05 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:38:05.667224 | orchestrator | 2025-06-02 17:38:05 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:38:08.698269 | orchestrator | 2025-06-02 17:38:08 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:38:08.698439 | orchestrator | 2025-06-02 17:38:08 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:38:08.699018 | orchestrator | 2025-06-02 17:38:08 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:38:08.700029 | orchestrator | 2025-06-02 17:38:08 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:38:08.701018 | orchestrator | 2025-06-02 17:38:08 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:38:08.701480 | orchestrator | 2025-06-02 17:38:08 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:38:11.735929 | orchestrator | 2025-06-02 17:38:11 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state STARTED
2025-06-02 17:38:11.736160 | orchestrator | 2025-06-02 17:38:11 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:38:11.736180 | orchestrator | 2025-06-02 17:38:11 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:38:11.736192 | orchestrator | 2025-06-02 17:38:11 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED
2025-06-02 17:38:11.737942 | orchestrator | 2025-06-02 17:38:11 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:38:11.737977 | orchestrator | 2025-06-02 17:38:11 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:38:14.789779 | orchestrator | 2025-06-02 17:38:14 | INFO  | Task f4a70a40-ebc7-46a0-8ab9-9548c9742185 is in state SUCCESS
2025-06-02 17:38:14.792253 | orchestrator |
2025-06-02 17:38:14.792339 | orchestrator |
2025-06-02 17:38:14.792354 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-06-02 17:38:14.792368 | orchestrator |
2025-06-02 17:38:14.792382 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-06-02 17:38:14.792395 | orchestrator | Monday 02 June 2025 17:33:27 +0000 (0:00:00.257) 0:00:00.257 ***********
2025-06-02 17:38:14.792409 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:38:14.792445 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:38:14.792459 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:38:14.792470 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.792479 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.792486 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.792494 | orchestrator |
2025-06-02 17:38:14.792503 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-06-02 17:38:14.792511 | orchestrator | Monday 02 June 2025 17:33:28 +0000 (0:00:00.751) 0:00:01.009 ***********
2025-06-02 17:38:14.792519 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:14.792528 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:14.792536 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:14.792592 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.792602 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.792610 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.792617 | orchestrator |
2025-06-02 17:38:14.792625 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-06-02 17:38:14.792633 | orchestrator | Monday 02 June 2025 17:33:28 +0000 (0:00:00.774) 0:00:01.784 ***********
2025-06-02 17:38:14.792641 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:14.792649 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:14.792657 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:14.792664 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.792672 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.792680 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.792687 | orchestrator |
2025-06-02 17:38:14.792695 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-06-02 17:38:14.792703 | orchestrator | Monday 02 June 2025 17:33:29 +0000 (0:00:00.933) 0:00:02.718 ***********
2025-06-02 17:38:14.792725 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:38:14.792733 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:38:14.792742 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:38:14.792749 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.792757 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:14.792765 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:14.792773 | orchestrator |
2025-06-02 17:38:14.792780 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-06-02 17:38:14.792788 | orchestrator | Monday 02 June 2025 17:33:31 +0000 (0:00:02.156) 0:00:04.874 ***********
2025-06-02 17:38:14.792796 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:38:14.792805 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:38:14.792814 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:38:14.792823 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.792831 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:14.792840 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:14.792849 | orchestrator |
2025-06-02 17:38:14.792858 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-06-02 17:38:14.792867 | orchestrator | Monday 02 June 2025 17:33:33 +0000 (0:00:01.281) 0:00:06.155 ***********
2025-06-02 17:38:14.792876 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:38:14.792884 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:38:14.792893 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:38:14.792902 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.792911 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:14.792919 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:14.792928 | orchestrator |
2025-06-02 17:38:14.792937 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-06-02 17:38:14.792946 | orchestrator | Monday 02 June 2025 17:33:34 +0000 (0:00:01.172) 0:00:07.328 ***********
2025-06-02 17:38:14.792955 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:14.792964 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:14.792973 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:14.792982 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.792991 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.792999 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.793008 | orchestrator |
2025-06-02 17:38:14.793017 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-06-02 17:38:14.793027 | orchestrator | Monday 02 June 2025 17:33:35 +0000 (0:00:00.694) 0:00:08.022 ***********
2025-06-02 17:38:14.793035 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:14.793044 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:14.793053 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:14.793061 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.793070 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.793079 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.793107 | orchestrator |
2025-06-02 17:38:14.793116 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-06-02 17:38:14.793125 | orchestrator | Monday 02 June 2025 17:33:35 +0000 (0:00:00.607) 0:00:08.629 ***********
2025-06-02 17:38:14.793134 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 17:38:14.793143 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 17:38:14.793152 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:14.793161 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 17:38:14.793170 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 17:38:14.793178 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:14.793186 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 17:38:14.793193 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 17:38:14.793201 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:14.793209 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 17:38:14.793233 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 17:38:14.793242 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.793250 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 17:38:14.793258 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 17:38:14.793265 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.793273 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-06-02 17:38:14.793281 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-06-02 17:38:14.793288 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.793296 | orchestrator |
2025-06-02 17:38:14.793304 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-06-02 17:38:14.793312 | orchestrator | Monday 02 June 2025 17:33:36 +0000 (0:00:00.818) 0:00:09.448 ***********
2025-06-02 17:38:14.793319 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:14.793327 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:14.793335 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:14.793342 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.793350 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.793358 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.793365 | orchestrator |
2025-06-02 17:38:14.793373 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-06-02 17:38:14.793382 | orchestrator | Monday 02 June 2025 17:33:37 +0000 (0:00:01.224) 0:00:10.673 ***********
2025-06-02 17:38:14.793389 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:38:14.793397 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:38:14.793405 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:38:14.793413 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.793420 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.793428 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.793436 | orchestrator |
2025-06-02 17:38:14.793443 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-06-02 17:38:14.793451 | orchestrator | Monday 02 June 2025 17:33:38 +0000 (0:00:00.872) 0:00:11.545 ***********
2025-06-02 17:38:14.793463 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:38:14.793471 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.793479 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:38:14.793486 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:38:14.793494 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:14.793502 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:14.793510 | orchestrator |
2025-06-02 17:38:14.793518 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-06-02 17:38:14.793598 | orchestrator | Monday 02 June 2025 17:33:45 +0000 (0:00:07.155) 0:00:18.700 ***********
2025-06-02 17:38:14.793610 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:14.793618 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:14.793626 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:14.793633 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.793641 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.793649 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.793656 | orchestrator |
2025-06-02 17:38:14.793664 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-06-02 17:38:14.793672 | orchestrator | Monday 02 June 2025 17:33:47 +0000 (0:00:01.835) 0:00:20.536 ***********
2025-06-02 17:38:14.793680 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:14.793687 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:14.793695 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:14.793703 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.793710 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.793718 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.793725 | orchestrator |
2025-06-02 17:38:14.793733 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-06-02 17:38:14.793743 | orchestrator | Monday 02 June 2025 17:33:49 +0000 (0:00:02.269) 0:00:22.805 ***********
2025-06-02 17:38:14.793750 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:14.793758 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:14.793766 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:14.793773 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.793781 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.793787 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.793794 | orchestrator |
2025-06-02 17:38:14.793802 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-06-02 17:38:14.793814 | orchestrator | Monday 02 June 2025 17:33:51 +0000 (0:00:01.426) 0:00:24.232 ***********
2025-06-02 17:38:14.793821 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2025-06-02 17:38:14.793828 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2025-06-02 17:38:14.793835 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:14.793842 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2025-06-02 17:38:14.793848 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2025-06-02 17:38:14.793855 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:14.793861 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2025-06-02 17:38:14.793868 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2025-06-02 17:38:14.793874 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:14.793881 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2025-06-02 17:38:14.793887 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2025-06-02 17:38:14.793894 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.793900 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2025-06-02 17:38:14.793906 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2025-06-02 17:38:14.793913 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.793919 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2025-06-02 17:38:14.793926 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2025-06-02 17:38:14.793932 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.793939 | orchestrator |
2025-06-02 17:38:14.793946 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-06-02 17:38:14.793958 | orchestrator | Monday 02 June 2025 17:33:53 +0000 (0:00:01.691) 0:00:25.924 ***********
2025-06-02 17:38:14.793965 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:38:14.793971 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:38:14.793978 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:38:14.793984 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.793997 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.794004 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.794010 | orchestrator |
2025-06-02 17:38:14.794044 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-06-02 17:38:14.794051 | orchestrator |
2025-06-02 17:38:14.794058 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-06-02 17:38:14.794065 | orchestrator | Monday 02 June 2025 17:33:54 +0000 (0:00:01.637) 0:00:27.561 ***********
2025-06-02 17:38:14.794071 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.794078 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.794085 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.794091 | orchestrator |
2025-06-02 17:38:14.794098 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-06-02 17:38:14.794104 | orchestrator | Monday 02 June 2025 17:33:57 +0000 (0:00:02.526) 0:00:30.088 ***********
2025-06-02 17:38:14.794111 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.794117 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.794124 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.794131 | orchestrator |
2025-06-02 17:38:14.794137 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-06-02 17:38:14.794144 | orchestrator | Monday 02 June 2025 17:33:59 +0000 (0:00:02.191) 0:00:32.279 ***********
2025-06-02 17:38:14.794151 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.794157 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.794164 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.794170 | orchestrator |
2025-06-02 17:38:14.794177 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-06-02 17:38:14.794184 | orchestrator | Monday 02 June 2025 17:34:01 +0000 (0:00:02.003) 0:00:34.282 ***********
2025-06-02 17:38:14.794190 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.794197 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.794204 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.794210 | orchestrator |
2025-06-02 17:38:14.794222 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-06-02 17:38:14.794229 | orchestrator | Monday 02 June 2025 17:34:02 +0000 (0:00:01.252) 0:00:35.535 ***********
2025-06-02 17:38:14.794235 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.794242 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.794249 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.794255 | orchestrator |
2025-06-02 17:38:14.794262 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-06-02 17:38:14.794268 | orchestrator | Monday 02 June 2025 17:34:03 +0000 (0:00:00.734) 0:00:36.269 ***********
2025-06-02 17:38:14.794275 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:38:14.794282 | orchestrator |
2025-06-02 17:38:14.794289 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-06-02 17:38:14.794295 | orchestrator | Monday 02 June 2025 17:34:04 +0000 (0:00:01.237) 0:00:37.507 ***********
2025-06-02 17:38:14.794302 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.794309 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.794316 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.794322 | orchestrator |
2025-06-02 17:38:14.794329 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-06-02 17:38:14.794335 | orchestrator | Monday 02 June 2025 17:34:09 +0000 (0:00:04.497) 0:00:42.005 ***********
2025-06-02 17:38:14.794342 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.794349 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.794355 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.794362 | orchestrator |
2025-06-02 17:38:14.794369 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-06-02 17:38:14.794375 | orchestrator | Monday 02 June 2025 17:34:10 +0000 (0:00:01.838) 0:00:43.843 ***********
2025-06-02 17:38:14.794382 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.794393 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.794400 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.794406 | orchestrator |
2025-06-02 17:38:14.794413 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-06-02 17:38:14.794420 | orchestrator | Monday 02 June 2025 17:34:11 +0000 (0:00:01.046) 0:00:44.890 ***********
2025-06-02 17:38:14.794426 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.794433 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.794439 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.794446 | orchestrator |
2025-06-02 17:38:14.794453 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-06-02 17:38:14.794459 | orchestrator | Monday 02 June 2025 17:34:16 +0000 (0:00:04.599) 0:00:49.489 ***********
2025-06-02 17:38:14.794466 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.794472 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.794479 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.794486 | orchestrator |
2025-06-02 17:38:14.794492 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-06-02 17:38:14.794499 | orchestrator | Monday 02 June 2025 17:34:16 +0000 (0:00:00.342) 0:00:49.832 ***********
2025-06-02 17:38:14.794505 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.794512 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.794518 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.794525 | orchestrator |
2025-06-02 17:38:14.794532 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-06-02 17:38:14.794539 | orchestrator | Monday 02 June 2025 17:34:17 +0000 (0:00:00.335) 0:00:50.167 ***********
2025-06-02 17:38:14.794564 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.794571 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:14.794578 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:14.794584 | orchestrator |
2025-06-02 17:38:14.794591 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-06-02 17:38:14.794598 | orchestrator | Monday 02 June 2025 17:34:19 +0000 (0:00:02.520) 0:00:52.688 ***********
2025-06-02 17:38:14.794609 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-06-02 17:38:14.794617 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-06-02 17:38:14.794624 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-06-02 17:38:14.794630 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-06-02 17:38:14.794637 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-06-02 17:38:14.794644 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-06-02 17:38:14.794650 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-06-02 17:38:14.794657 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-06-02 17:38:14.794664 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-06-02 17:38:14.794670 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-06-02 17:38:14.794681 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-06-02 17:38:14.794692 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-06-02 17:38:14.794699 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-06-02 17:38:14.794706 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-06-02 17:38:14.794712 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-06-02 17:38:14.794719 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.794726 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.794732 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.794739 | orchestrator |
2025-06-02 17:38:14.794746 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-06-02 17:38:14.794753 | orchestrator | Monday 02 June 2025 17:35:15 +0000 (0:00:55.268) 0:01:47.956 ***********
2025-06-02 17:38:14.794759 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.794766 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.794772 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.794779 | orchestrator |
2025-06-02 17:38:14.794785 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-06-02 17:38:14.794792 | orchestrator | Monday 02 June 2025 17:35:15 +0000 (0:00:00.354) 0:01:48.311 ***********
2025-06-02 17:38:14.794799 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.794806 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:14.794813 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:14.794819 | orchestrator |
2025-06-02 17:38:14.794826 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-06-02 17:38:14.794832 | orchestrator | Monday 02 June 2025 17:35:16 +0000 (0:00:01.242) 0:01:49.553 ***********
2025-06-02 17:38:14.794839 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.794846 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:14.794852 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:14.794859 | orchestrator |
2025-06-02 17:38:14.794865 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-06-02 17:38:14.794872 | orchestrator | Monday 02 June 2025 17:35:17 +0000 (0:00:01.172) 0:01:50.726 ***********
2025-06-02 17:38:14.794879 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:14.794885 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.794892 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:14.794898 | orchestrator |
2025-06-02 17:38:14.794905 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-06-02 17:38:14.794911 | orchestrator | Monday 02 June 2025 17:35:32 +0000 (0:00:14.846) 0:02:05.572 ***********
2025-06-02 17:38:14.794918 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.794925 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.794931 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.794938 | orchestrator |
2025-06-02 17:38:14.794945 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-06-02 17:38:14.794951 | orchestrator | Monday 02 June 2025 17:35:33 +0000 (0:00:00.658) 0:02:06.231 ***********
2025-06-02 17:38:14.794958 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.794964 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.794971 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.794977 | orchestrator |
2025-06-02 17:38:14.794984 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-06-02 17:38:14.794991 | orchestrator | Monday 02 June 2025 17:35:34 +0000 (0:00:00.785) 0:02:07.016 ***********
2025-06-02 17:38:14.794997 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.795004 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:14.795011 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:14.795018 | orchestrator |
2025-06-02 17:38:14.795028 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-06-02 17:38:14.795040 | orchestrator | Monday 02 June 2025 17:35:34 +0000 (0:00:00.749) 0:02:07.766 ***********
2025-06-02 17:38:14.795047 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.795053 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.795060 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.795066 | orchestrator |
2025-06-02 17:38:14.795073 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-06-02 17:38:14.795080 | orchestrator | Monday 02 June 2025 17:35:35 +0000 (0:00:00.996) 0:02:08.763 ***********
2025-06-02 17:38:14.795086 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.795093 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.795099 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.795106 | orchestrator |
2025-06-02 17:38:14.795113 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-06-02 17:38:14.795119 | orchestrator | Monday 02 June 2025 17:35:36 +0000 (0:00:00.388) 0:02:09.152 ***********
2025-06-02 17:38:14.795126 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.795132 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:14.795139 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:14.795146 | orchestrator |
2025-06-02 17:38:14.795152 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-06-02 17:38:14.795159 | orchestrator | Monday 02 June 2025 17:35:36 +0000 (0:00:00.762) 0:02:09.914 ***********
2025-06-02 17:38:14.795166 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.795172 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:14.795179 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:14.795186 | orchestrator |
2025-06-02 17:38:14.795192 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-06-02 17:38:14.795199 | orchestrator | Monday 02 June 2025 17:35:37 +0000 (0:00:00.773) 0:02:10.687 ***********
2025-06-02 17:38:14.795205 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.795212 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:14.795218 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:14.795225 | orchestrator |
2025-06-02 17:38:14.795232 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-06-02 17:38:14.795238 | orchestrator | Monday 02 June 2025 17:35:39 +0000 (0:00:01.654) 0:02:12.342 ***********
2025-06-02 17:38:14.795245 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:38:14.795251 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:38:14.795258 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:38:14.795264 | orchestrator |
2025-06-02 17:38:14.795271 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-06-02 17:38:14.795278 | orchestrator | Monday 02 June 2025 17:35:40 +0000 (0:00:00.912) 0:02:13.254 ***********
2025-06-02 17:38:14.795284 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.795291 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.795298 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.795304 | orchestrator |
2025-06-02 17:38:14.795311 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-06-02 17:38:14.795318 | orchestrator | Monday 02 June 2025 17:35:40 +0000 (0:00:00.343) 0:02:13.598 ***********
2025-06-02 17:38:14.795324 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:38:14.795331 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:38:14.795337 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:38:14.795344 | orchestrator |
2025-06-02 17:38:14.795350 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-06-02 17:38:14.795357 | orchestrator | Monday 02 June 2025 17:35:41 +0000 (0:00:00.337) 0:02:13.936 ***********
2025-06-02 17:38:14.795364 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.795370 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.795377 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.795384 | orchestrator |
2025-06-02 17:38:14.795390 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-06-02 17:38:14.795397 | orchestrator | Monday 02 June 2025 17:35:42 +0000 (0:00:01.406) 0:02:15.343 ***********
2025-06-02 17:38:14.795408 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:38:14.795414 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:38:14.795421 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:38:14.795427 | orchestrator |
2025-06-02 17:38:14.804301 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-06-02 17:38:14.804390 | orchestrator | Monday 02 June 2025 17:35:43 +0000 (0:00:00.712) 0:02:16.055 ***********
2025-06-02 17:38:14.804399 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-02 17:38:14.804407 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-02 17:38:14.804413 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-06-02 17:38:14.804419 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-02 17:38:14.804426 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-02 17:38:14.804432 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-06-02 17:38:14.804438 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Monday 02 June 2025 17:35:46 +0000 (0:00:03.812) 0:02:19.868 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Monday 02 June 2025 17:35:47 +0000 (0:00:00.553) 0:02:20.422 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Monday 02 June 2025 17:35:48 +0000 (0:00:00.661) 0:02:21.083 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Monday 02 June 2025 17:35:48 +0000 (0:00:00.342) 0:02:21.425 ***********
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Monday 02 June 2025 17:35:49 +0000 (0:00:00.722) 0:02:22.148 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Monday 02 June 2025 17:35:49 +0000 (0:00:00.312) 0:02:22.460 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Monday 02 June 2025 17:35:49 +0000 (0:00:00.347) 0:02:22.808 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Monday 02 June 2025 17:35:50 +0000 (0:00:00.325) 0:02:23.133 ***********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [k3s_agent : Manage k3s service] ******************************************
Monday 02
June 2025 17:35:51 +0000 (0:00:01.383) 0:02:24.516 ***********
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Monday 02 June 2025 17:36:00 +0000 (0:00:08.663) 0:02:33.180 ***********
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Monday 02 June 2025 17:36:00 +0000 (0:00:00.721) 0:02:33.901 ***********
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Monday 02 June 2025 17:36:01 +0000 (0:00:00.422) 0:02:34.323 ***********
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Monday 02 June 2025 17:36:02 +0000 (0:00:00.874) 0:02:35.197 ***********
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Monday 02 June 2025 17:36:02 +0000 (0:00:00.689) 0:02:35.887 ***********
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Monday 02 June 2025 17:36:03 +0000 (0:00:00.479) 0:02:36.366 ***********
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Monday 02 June 2025 17:36:05 +0000 (0:00:01.587) 0:02:37.954 ***********
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Monday 02 June 2025 17:36:05 +0000 (0:00:00.922) 0:02:38.876 ***********
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Monday 02 June 2025 17:36:06 +0000 (0:00:00.462) 0:02:39.339 ***********
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Monday 02 June 2025 17:36:06 +0000 (0:00:00.498) 0:02:39.838 ***********
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Monday 02 June 2025 17:36:07 +0000 (0:00:00.139) 0:02:39.978 ***********
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Monday 02 June 2025 17:36:07 +0000 (0:00:00.431) 0:02:40.409 ***********
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Monday 02 June 2025 17:36:08 +0000 (0:00:00.974) 0:02:41.384 ***********
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Monday 02 June 2025 17:36:10 +0000 (0:00:01.634) 0:02:43.018 ***********
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Monday 02 June 2025 17:36:10 +0000 (0:00:00.800) 0:02:43.818 ***********
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Monday 02 June 2025 17:36:11 +0000 (0:00:00.463) 0:02:44.282 ***********
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Monday 02 June 2025 17:36:18 +0000 (0:00:07.464) 0:02:51.747 ***********
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Monday 02 June 2025 17:36:31 +0000 (0:00:12.875) 0:03:04.623 ***********
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Monday 02 June 2025 17:36:32 +0000 (0:00:00.449) 0:03:05.072 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Monday 02 June 2025 17:36:32 +0000 (0:00:00.489) 0:03:05.562 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Monday 02 June 2025 17:36:32 +0000 (0:00:00.271) 0:03:05.833 ***********
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Monday 02 June 2025 17:36:33 +0000 (0:00:00.491) 0:03:06.325 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Monday 02 June 2025 17:36:34 +0000 (0:00:01.263) 0:03:07.588 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Monday 02 June 2025 17:36:35 +0000 (0:00:01.124) 0:03:08.712 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Monday 02 June 2025 17:36:36 +0000 (0:00:00.282) 0:03:08.994 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Monday 02 June 2025 17:36:37 +0000 (0:00:01.188) 0:03:10.182 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Monday 02 June 2025 17:36:37 +0000 (0:00:00.240) 0:03:10.423 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Monday 02 June 2025 17:36:37 +0000 (0:00:00.197) 0:03:10.620 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Monday 02 June 2025 17:36:37 +0000 (0:00:00.245) 0:03:10.866 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Monday 02 June 2025 17:36:38 +0000 (0:00:00.310) 0:03:11.176 ***********
changed: [testbed-node-0 -> localhost]
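The version gate the log shows above (Test for existing Cilium install → Check Cilium version → Parse installed Cilium version → Determine if Cilium needs update → Install Cilium) boils down to "install when Cilium is absent or older than the target". A minimal sketch of that decision follows; the `cilium version` output format and the target version used here are assumptions for illustration, not taken from the role:

```python
import re

# Assumed target version; the role would take this from a variable.
TARGET_VERSION = (1, 16, 5)

def parse_cilium_version(output: str):
    """Extract (major, minor, patch) from assumed `cilium version` CLI
    output such as 'cilium image (running): v1.15.6'.
    Returns None when no running version is found (no existing install)."""
    m = re.search(r"running\):\s*v?(\d+)\.(\d+)\.(\d+)", output)
    if not m:
        return None
    return tuple(int(x) for x in m.groups())

def needs_install(output: str, target=TARGET_VERSION) -> bool:
    """Install or upgrade when Cilium is absent or older than the target."""
    current = parse_cilium_version(output)
    return current is None or current < target

print(needs_install("cilium image (running): v1.14.2"))  # True: older than target
print(needs_install("cilium image (running): v1.16.5"))  # False: already current
```

In the run above the version-check tasks were skipped (fresh cluster, no existing install), so the gate fell straight through to "Install Cilium".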
TASK [k3s_server_post : Wait for Cilium resources] *****************************
Monday 02 June 2025 17:36:44 +0000 (0:00:06.137) 0:03:17.314 ***********
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)

TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
Monday 02 June 2025 17:37:43 +0000 (0:00:58.938) 0:04:16.253 ***********
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Copy BGP manifests to first master] ********************
Monday 02 June 2025 17:37:45 +0000 (0:00:01.691) 0:04:17.944 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Apply BGP manifests] ***********************************
Monday 02 June 2025 17:37:47 +0000 (0:00:02.009) 0:04:19.954 ***********
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
Monday 02 June 2025 17:37:48 +0000 (0:00:01.683) 0:04:21.637 ***********
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for BGP config resources] *************************
Monday 02 June 2025 17:37:49 +0000 (0:00:00.284) 0:04:21.921 ***********
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)

TASK [k3s_server_post : Deploy metallb pool] ***********************************
Monday 02 June 2025 17:37:51 +0000 (0:00:02.407) 0:04:24.329 ***********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
Monday 02 June 2025 17:37:51 +0000 (0:00:00.354) 0:04:24.684 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role k9s] **********************************************************

TASK [k9s : Gather variables for each operating system] ************************
Monday 02 June 2025 17:37:52 +0000 (0:00:00.842) 0:04:25.526 ***********
ok: [testbed-manager]

TASK [k9s : Include distribution specific install tasks] ***********************
Monday 02 June 2025 17:37:52 +0000 (0:00:00.374) 0:04:25.900 ***********
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Monday 02 June 2025 17:37:53 +0000 (0:00:00.255) 0:04:26.156 ***********
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Monday 02 June 2025 17:37:59 +0000 (0:00:05.833) 0:04:31.990 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Monday 02 June 2025 17:37:59 +0000 (0:00:00.748) 0:04:32.739 ***********
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)

TASK [Manage annotations] ******************************************************
Monday 02 June 2025 17:38:10 +0000 (0:00:10.879) 0:04:43.618 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Manage taints] ***********************************************************
Monday 02 June 2025 17:38:11 +0000 (0:00:00.603) 0:04:44.222 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager            : ok=21   changed=11   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
testbed-node-0             : ok=46   changed=21   unreachable=0    failed=0    skipped=27   rescued=0    ignored=0
testbed-node-1             : ok=34   changed=14
 unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-02 17:38:14.806606 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-02 17:38:14.806612 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 17:38:14.806618 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 17:38:14.806624 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-02 17:38:14.806630 | orchestrator | 2025-06-02 17:38:14.806636 | orchestrator | 2025-06-02 17:38:14.806642 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:38:14.806649 | orchestrator | Monday 02 June 2025 17:38:11 +0000 (0:00:00.590) 0:04:44.812 *********** 2025-06-02 17:38:14.806655 | orchestrator | =============================================================================== 2025-06-02 17:38:14.806661 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 58.94s 2025-06-02 17:38:14.806667 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.27s 2025-06-02 17:38:14.806673 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 14.85s 2025-06-02 17:38:14.806679 | orchestrator | kubectl : Install required packages ------------------------------------ 12.88s 2025-06-02 17:38:14.806685 | orchestrator | Manage labels ---------------------------------------------------------- 10.88s 2025-06-02 17:38:14.806691 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.66s 2025-06-02 17:38:14.806697 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.46s 2025-06-02 17:38:14.806703 | orchestrator | k3s_download : 
Download k3s binary x64 ---------------------------------- 7.16s 2025-06-02 17:38:14.806709 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.14s 2025-06-02 17:38:14.806715 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.83s 2025-06-02 17:38:14.806721 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 4.60s 2025-06-02 17:38:14.806727 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 4.50s 2025-06-02 17:38:14.806733 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.81s 2025-06-02 17:38:14.806739 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.53s 2025-06-02 17:38:14.806745 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.52s 2025-06-02 17:38:14.806751 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.41s 2025-06-02 17:38:14.806757 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.27s 2025-06-02 17:38:14.806764 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 2.19s 2025-06-02 17:38:14.806770 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.16s 2025-06-02 17:38:14.806776 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.01s 2025-06-02 17:38:14.806782 | orchestrator | 2025-06-02 17:38:14 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:38:14.806792 | orchestrator | 2025-06-02 17:38:14 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:38:14.806798 | orchestrator | 2025-06-02 17:38:14 | INFO  | Task 
518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:38:14.806804 | orchestrator | 2025-06-02 17:38:14 | INFO  | Task 4e3fcd5c-9a1a-4b0e-aa35-3a05fed4bf98 is in state STARTED 2025-06-02 17:38:14.806815 | orchestrator | 2025-06-02 17:38:14 | INFO  | Task 268b27ce-6ebc-4a1f-a611-5b678b61dd33 is in state STARTED 2025-06-02 17:38:14.806821 | orchestrator | 2025-06-02 17:38:14 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:38:14.806827 | orchestrator | 2025-06-02 17:38:14 | INFO  | Wait 1 second(s) until the next check
[... repeated per-second polling rounds ("Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check") from 17:38:17 to 17:39:03 elided; all tasks remained in state STARTED except for the following transitions ...]
2025-06-02 17:38:20.937232 | orchestrator | 2025-06-02 17:38:20 | INFO  | Task 268b27ce-6ebc-4a1f-a611-5b678b61dd33 is in state SUCCESS
2025-06-02 17:38:27.042688 | orchestrator | 2025-06-02 17:38:27 | INFO  | Task 4e3fcd5c-9a1a-4b0e-aa35-3a05fed4bf98 is in state SUCCESS
2025-06-02 17:39:06.720639 | orchestrator | 2025-06-02 17:39:06 | INFO  | Task
dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:39:06.721774 | orchestrator | 2025-06-02 17:39:06 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:39:06.722430 | orchestrator | 2025-06-02 17:39:06 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state STARTED 2025-06-02 17:39:06.723547 | orchestrator | 2025-06-02 17:39:06 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:39:06.723611 | orchestrator | 2025-06-02 17:39:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:39:09.758962 | orchestrator | 2025-06-02 17:39:09 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:39:09.760033 | orchestrator | 2025-06-02 17:39:09 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:39:09.761577 | orchestrator | 2025-06-02 17:39:09 | INFO  | Task 518499a9-6184-4d50-893b-c847e0d23165 is in state SUCCESS 2025-06-02 17:39:09.763138 | orchestrator | 2025-06-02 17:39:09.763186 | orchestrator | 2025-06-02 17:39:09.763200 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-02 17:39:09.763218 | orchestrator | 2025-06-02 17:39:09.763239 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-02 17:39:09.763267 | orchestrator | Monday 02 June 2025 17:38:17 +0000 (0:00:00.194) 0:00:00.194 *********** 2025-06-02 17:39:09.763286 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-02 17:39:09.763304 | orchestrator | 2025-06-02 17:39:09.763322 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-02 17:39:09.763338 | orchestrator | Monday 02 June 2025 17:38:17 +0000 (0:00:00.862) 0:00:01.056 *********** 2025-06-02 17:39:09.763357 | orchestrator | changed: [testbed-manager] 2025-06-02 17:39:09.763377 | orchestrator | 2025-06-02 
17:39:09.763396 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-02 17:39:09.763417 | orchestrator | Monday 02 June 2025 17:38:19 +0000 (0:00:01.327) 0:00:02.384 *********** 2025-06-02 17:39:09.763434 | orchestrator | changed: [testbed-manager] 2025-06-02 17:39:09.763445 | orchestrator | 2025-06-02 17:39:09.763456 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:39:09.763468 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:39:09.763480 | orchestrator | 2025-06-02 17:39:09.763491 | orchestrator | 2025-06-02 17:39:09.763540 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:39:09.763551 | orchestrator | Monday 02 June 2025 17:38:19 +0000 (0:00:00.513) 0:00:02.898 *********** 2025-06-02 17:39:09.763563 | orchestrator | =============================================================================== 2025-06-02 17:39:09.763574 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.33s 2025-06-02 17:39:09.763585 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.86s 2025-06-02 17:39:09.763596 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.51s 2025-06-02 17:39:09.763607 | orchestrator | 2025-06-02 17:39:09.763618 | orchestrator | 2025-06-02 17:39:09.763657 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-02 17:39:09.763668 | orchestrator | 2025-06-02 17:39:09.763679 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-02 17:39:09.763705 | orchestrator | Monday 02 June 2025 17:38:16 +0000 (0:00:00.174) 0:00:00.174 *********** 2025-06-02 17:39:09.763741 | orchestrator | ok: [testbed-manager] 
2025-06-02 17:39:09.763755 | orchestrator | 2025-06-02 17:39:09.763768 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-02 17:39:09.763782 | orchestrator | Monday 02 June 2025 17:38:17 +0000 (0:00:00.680) 0:00:00.854 *********** 2025-06-02 17:39:09.763794 | orchestrator | ok: [testbed-manager] 2025-06-02 17:39:09.763806 | orchestrator | 2025-06-02 17:39:09.763819 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-02 17:39:09.763832 | orchestrator | Monday 02 June 2025 17:38:18 +0000 (0:00:00.691) 0:00:01.546 *********** 2025-06-02 17:39:09.763844 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-02 17:39:09.763857 | orchestrator | 2025-06-02 17:39:09.763869 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-02 17:39:09.763882 | orchestrator | Monday 02 June 2025 17:38:18 +0000 (0:00:00.785) 0:00:02.332 *********** 2025-06-02 17:39:09.763894 | orchestrator | changed: [testbed-manager] 2025-06-02 17:39:09.763906 | orchestrator | 2025-06-02 17:39:09.763918 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-02 17:39:09.763931 | orchestrator | Monday 02 June 2025 17:38:20 +0000 (0:00:01.278) 0:00:03.610 *********** 2025-06-02 17:39:09.763944 | orchestrator | changed: [testbed-manager] 2025-06-02 17:39:09.763957 | orchestrator | 2025-06-02 17:39:09.763970 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-02 17:39:09.763982 | orchestrator | Monday 02 June 2025 17:38:21 +0000 (0:00:00.916) 0:00:04.526 *********** 2025-06-02 17:39:09.763995 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 17:39:09.764007 | orchestrator | 2025-06-02 17:39:09.764020 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 
2025-06-02 17:39:09.764032 | orchestrator | Monday 02 June 2025 17:38:22 +0000 (0:00:01.650) 0:00:06.177 *********** 2025-06-02 17:39:09.764044 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-02 17:39:09.764057 | orchestrator | 2025-06-02 17:39:09.764070 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-02 17:39:09.764082 | orchestrator | Monday 02 June 2025 17:38:23 +0000 (0:00:00.862) 0:00:07.040 *********** 2025-06-02 17:39:09.764095 | orchestrator | ok: [testbed-manager] 2025-06-02 17:39:09.764105 | orchestrator | 2025-06-02 17:39:09.764116 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-02 17:39:09.764127 | orchestrator | Monday 02 June 2025 17:38:24 +0000 (0:00:00.474) 0:00:07.514 *********** 2025-06-02 17:39:09.764137 | orchestrator | ok: [testbed-manager] 2025-06-02 17:39:09.764148 | orchestrator | 2025-06-02 17:39:09.764159 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:39:09.764170 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:39:09.764181 | orchestrator | 2025-06-02 17:39:09.764192 | orchestrator | 2025-06-02 17:39:09.764202 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:39:09.764213 | orchestrator | Monday 02 June 2025 17:38:24 +0000 (0:00:00.415) 0:00:07.930 *********** 2025-06-02 17:39:09.764223 | orchestrator | =============================================================================== 2025-06-02 17:39:09.764234 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.65s 2025-06-02 17:39:09.764245 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.28s 2025-06-02 17:39:09.764256 | orchestrator | Change server address in the kubeconfig 
--------------------------------- 0.92s 2025-06-02 17:39:09.764282 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.86s 2025-06-02 17:39:09.764329 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.79s 2025-06-02 17:39:09.764341 | orchestrator | Create .kube directory -------------------------------------------------- 0.69s 2025-06-02 17:39:09.764351 | orchestrator | Get home directory of operator user ------------------------------------- 0.68s 2025-06-02 17:39:09.764362 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.47s 2025-06-02 17:39:09.764373 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.42s 2025-06-02 17:39:09.764383 | orchestrator | 2025-06-02 17:39:09.764394 | orchestrator | 2025-06-02 17:39:09.764405 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-02 17:39:09.764416 | orchestrator | 2025-06-02 17:39:09.764426 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-02 17:39:09.764437 | orchestrator | Monday 02 June 2025 17:36:47 +0000 (0:00:00.211) 0:00:00.211 *********** 2025-06-02 17:39:09.764448 | orchestrator | ok: [localhost] => { 2025-06-02 17:39:09.764460 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-06-02 17:39:09.764471 | orchestrator | } 2025-06-02 17:39:09.764482 | orchestrator | 2025-06-02 17:39:09.764493 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-06-02 17:39:09.764522 | orchestrator | Monday 02 June 2025 17:36:48 +0000 (0:00:00.098) 0:00:00.310 *********** 2025-06-02 17:39:09.764549 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-06-02 17:39:09.764563 | orchestrator | ...ignoring 2025-06-02 17:39:09.764574 | orchestrator | 2025-06-02 17:39:09.764585 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-06-02 17:39:09.764595 | orchestrator | Monday 02 June 2025 17:36:51 +0000 (0:00:03.621) 0:00:03.932 *********** 2025-06-02 17:39:09.764606 | orchestrator | skipping: [localhost] 2025-06-02 17:39:09.764616 | orchestrator | 2025-06-02 17:39:09.764627 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-06-02 17:39:09.764637 | orchestrator | Monday 02 June 2025 17:36:51 +0000 (0:00:00.081) 0:00:04.013 *********** 2025-06-02 17:39:09.764648 | orchestrator | ok: [localhost] 2025-06-02 17:39:09.764659 | orchestrator | 2025-06-02 17:39:09.764676 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:39:09.764687 | orchestrator | 2025-06-02 17:39:09.764698 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:39:09.764708 | orchestrator | Monday 02 June 2025 17:36:52 +0000 (0:00:00.250) 0:00:04.264 *********** 2025-06-02 17:39:09.764719 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:09.764730 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:09.764740 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:09.764751 | orchestrator | 2025-06-02 17:39:09.764761 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:39:09.764772 | orchestrator | Monday 02 June 2025 17:36:52 +0000 (0:00:00.318) 0:00:04.582 *********** 2025-06-02 17:39:09.764782 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-06-02 17:39:09.764794 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2025-06-02 17:39:09.764804 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-06-02 17:39:09.764815 | orchestrator | 2025-06-02 17:39:09.764826 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-06-02 17:39:09.764836 | orchestrator | 2025-06-02 17:39:09.764847 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 17:39:09.764858 | orchestrator | Monday 02 June 2025 17:36:52 +0000 (0:00:00.626) 0:00:05.209 *********** 2025-06-02 17:39:09.764868 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:39:09.764879 | orchestrator | 2025-06-02 17:39:09.764890 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-02 17:39:09.764907 | orchestrator | Monday 02 June 2025 17:36:53 +0000 (0:00:00.651) 0:00:05.861 *********** 2025-06-02 17:39:09.764918 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:09.764929 | orchestrator | 2025-06-02 17:39:09.764939 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-06-02 17:39:09.764950 | orchestrator | Monday 02 June 2025 17:36:54 +0000 (0:00:01.109) 0:00:06.971 *********** 2025-06-02 17:39:09.764960 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:09.764971 | orchestrator | 2025-06-02 17:39:09.764981 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-06-02 17:39:09.764992 | orchestrator | Monday 02 June 2025 17:36:55 +0000 (0:00:00.634) 0:00:07.606 *********** 2025-06-02 17:39:09.765003 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:09.765013 | orchestrator | 2025-06-02 17:39:09.765024 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-02 17:39:09.765034 | 
orchestrator | Monday 02 June 2025 17:36:55 +0000 (0:00:00.606) 0:00:08.212 *********** 2025-06-02 17:39:09.765045 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:09.765055 | orchestrator | 2025-06-02 17:39:09.765066 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-02 17:39:09.765077 | orchestrator | Monday 02 June 2025 17:36:56 +0000 (0:00:00.435) 0:00:08.648 *********** 2025-06-02 17:39:09.765087 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:09.765098 | orchestrator | 2025-06-02 17:39:09.765108 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 17:39:09.765119 | orchestrator | Monday 02 June 2025 17:36:57 +0000 (0:00:00.778) 0:00:09.426 *********** 2025-06-02 17:39:09.765130 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:39:09.765140 | orchestrator | 2025-06-02 17:39:09.765151 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-02 17:39:09.765168 | orchestrator | Monday 02 June 2025 17:36:58 +0000 (0:00:01.429) 0:00:10.855 *********** 2025-06-02 17:39:09.765179 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:09.765190 | orchestrator | 2025-06-02 17:39:09.765201 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-06-02 17:39:09.765211 | orchestrator | Monday 02 June 2025 17:36:59 +0000 (0:00:00.977) 0:00:11.833 *********** 2025-06-02 17:39:09.765222 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:09.765232 | orchestrator | 2025-06-02 17:39:09.765243 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-02 17:39:09.765253 | orchestrator | Monday 02 June 2025 17:37:00 +0000 (0:00:00.528) 0:00:12.361 *********** 2025-06-02 17:39:09.765264 | orchestrator | 
skipping: [testbed-node-0] 2025-06-02 17:39:09.765274 | orchestrator | 2025-06-02 17:39:09.765285 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-02 17:39:09.765296 | orchestrator | Monday 02 June 2025 17:37:00 +0000 (0:00:00.463) 0:00:12.825 *********** 2025-06-02 17:39:09.765339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
[... identical (item=...) service definitions for testbed-node-1 and testbed-node-2 elided ...]
2025-06-02 17:39:09.765404 | orchestrator | 2025-06-02 17:39:09.765416 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-02 17:39:09.765427 | orchestrator | Monday 02 June 2025 17:37:01 +0000 (0:00:01.007) 0:00:13.833 *********** 2025-06-02 17:39:09.765447 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 17:39:09.765465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 17:39:09.765484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 17:39:09.765496 | orchestrator | 2025-06-02 17:39:09.765522 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-06-02 17:39:09.765533 | orchestrator | Monday 02 June 2025 17:37:03 +0000 (0:00:02.177) 0:00:16.010 *********** 2025-06-02 17:39:09.765544 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-02 17:39:09.765555 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-02 17:39:09.765566 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-02 17:39:09.765577 | orchestrator | 2025-06-02 17:39:09.765588 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
2025-06-02 17:39:09.765598 | orchestrator | Monday 02 June 2025 17:37:06 +0000 (0:00:02.411) 0:00:18.422 *********** 2025-06-02 17:39:09.765609 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-02 17:39:09.765620 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-02 17:39:09.765631 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-02 17:39:09.765641 | orchestrator | 2025-06-02 17:39:09.765652 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-06-02 17:39:09.765669 | orchestrator | Monday 02 June 2025 17:37:09 +0000 (0:00:03.701) 0:00:22.124 *********** 2025-06-02 17:39:09.765680 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-02 17:39:09.765691 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-02 17:39:09.765702 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-02 17:39:09.765712 | orchestrator | 2025-06-02 17:39:09.765723 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-06-02 17:39:09.765734 | orchestrator | Monday 02 June 2025 17:37:11 +0000 (0:00:01.742) 0:00:23.866 *********** 2025-06-02 17:39:09.765744 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-02 17:39:09.765755 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-02 17:39:09.765766 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-02 17:39:09.765783 | orchestrator | 2025-06-02 17:39:09.765794 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2025-06-02 17:39:09.765805 | orchestrator | Monday 02 June 2025 17:37:14 +0000 (0:00:02.895) 0:00:26.762 *********** 2025-06-02 17:39:09.765815 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-02 17:39:09.765826 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-02 17:39:09.765837 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-02 17:39:09.765847 | orchestrator | 2025-06-02 17:39:09.765858 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-06-02 17:39:09.765868 | orchestrator | Monday 02 June 2025 17:37:16 +0000 (0:00:02.081) 0:00:28.843 *********** 2025-06-02 17:39:09.765879 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 17:39:09.765890 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 17:39:09.765907 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-02 17:39:09.765925 | orchestrator | 2025-06-02 17:39:09.765943 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-02 17:39:09.765960 | orchestrator | Monday 02 June 2025 17:37:18 +0000 (0:00:01.499) 0:00:30.343 *********** 2025-06-02 17:39:09.765977 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:09.765994 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:09.766012 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:09.766101 | orchestrator | 2025-06-02 17:39:09.766118 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-02 17:39:09.766129 | orchestrator | Monday 02 June 2025 17:37:18 
+0000 (0:00:00.473) 0:00:30.816 *********** 2025-06-02 17:39:09.766141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 17:39:09.766574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 17:39:09.766635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 17:39:09.766648 | orchestrator | 2025-06-02 17:39:09.766659 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-06-02 17:39:09.766671 | orchestrator | Monday 02 June 2025 17:37:20 +0000 (0:00:01.605) 0:00:32.421 *********** 2025-06-02 17:39:09.766682 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:39:09.766693 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:39:09.766703 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:39:09.766714 | orchestrator | 2025-06-02 17:39:09.766724 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-02 17:39:09.766735 | 
orchestrator | Monday 02 June 2025 17:37:21 +0000 (0:00:00.867) 0:00:33.289 *********** 2025-06-02 17:39:09.766769 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:39:09.766781 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:39:09.766792 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:39:09.766802 | orchestrator | 2025-06-02 17:39:09.766813 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-06-02 17:39:09.766824 | orchestrator | Monday 02 June 2025 17:37:28 +0000 (0:00:07.904) 0:00:41.194 *********** 2025-06-02 17:39:09.766835 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:39:09.766845 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:39:09.766856 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:39:09.766866 | orchestrator | 2025-06-02 17:39:09.766877 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-02 17:39:09.766888 | orchestrator | 2025-06-02 17:39:09.766898 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-02 17:39:09.766909 | orchestrator | Monday 02 June 2025 17:37:29 +0000 (0:00:01.017) 0:00:42.211 *********** 2025-06-02 17:39:09.766920 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:09.766931 | orchestrator | 2025-06-02 17:39:09.766942 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-02 17:39:09.766953 | orchestrator | Monday 02 June 2025 17:37:30 +0000 (0:00:00.728) 0:00:42.939 *********** 2025-06-02 17:39:09.766963 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:39:09.766974 | orchestrator | 2025-06-02 17:39:09.766985 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-02 17:39:09.766995 | orchestrator | Monday 02 June 2025 17:37:31 +0000 (0:00:00.314) 0:00:43.253 *********** 2025-06-02 17:39:09.767006 | orchestrator 
| changed: [testbed-node-0] 2025-06-02 17:39:09.767016 | orchestrator | 2025-06-02 17:39:09.767027 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-02 17:39:09.767038 | orchestrator | Monday 02 June 2025 17:37:32 +0000 (0:00:01.737) 0:00:44.991 *********** 2025-06-02 17:39:09.767049 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:39:09.767060 | orchestrator | 2025-06-02 17:39:09.767071 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-02 17:39:09.767089 | orchestrator | 2025-06-02 17:39:09.767100 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-02 17:39:09.767111 | orchestrator | Monday 02 June 2025 17:38:29 +0000 (0:00:56.407) 0:01:41.399 *********** 2025-06-02 17:39:09.767123 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:09.767134 | orchestrator | 2025-06-02 17:39:09.767144 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-02 17:39:09.767155 | orchestrator | Monday 02 June 2025 17:38:29 +0000 (0:00:00.568) 0:01:41.967 *********** 2025-06-02 17:39:09.767166 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:39:09.767177 | orchestrator | 2025-06-02 17:39:09.767189 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-02 17:39:09.767200 | orchestrator | Monday 02 June 2025 17:38:30 +0000 (0:00:00.401) 0:01:42.368 *********** 2025-06-02 17:39:09.767211 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:39:09.767222 | orchestrator | 2025-06-02 17:39:09.767233 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-02 17:39:09.767244 | orchestrator | Monday 02 June 2025 17:38:32 +0000 (0:00:01.877) 0:01:44.246 *********** 2025-06-02 17:39:09.767256 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:39:09.767267 
| orchestrator | 2025-06-02 17:39:09.767278 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-02 17:39:09.767288 | orchestrator | 2025-06-02 17:39:09.767299 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-02 17:39:09.767310 | orchestrator | Monday 02 June 2025 17:38:47 +0000 (0:00:15.503) 0:01:59.750 *********** 2025-06-02 17:39:09.767321 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:09.767333 | orchestrator | 2025-06-02 17:39:09.767356 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-02 17:39:09.767367 | orchestrator | Monday 02 June 2025 17:38:48 +0000 (0:00:00.615) 0:02:00.366 *********** 2025-06-02 17:39:09.767378 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:39:09.767389 | orchestrator | 2025-06-02 17:39:09.767400 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-02 17:39:09.767411 | orchestrator | Monday 02 June 2025 17:38:48 +0000 (0:00:00.236) 0:02:00.603 *********** 2025-06-02 17:39:09.767423 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:39:09.767434 | orchestrator | 2025-06-02 17:39:09.767444 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-02 17:39:09.767454 | orchestrator | Monday 02 June 2025 17:38:54 +0000 (0:00:06.523) 0:02:07.126 *********** 2025-06-02 17:39:09.767463 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:39:09.767473 | orchestrator | 2025-06-02 17:39:09.767482 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-06-02 17:39:09.767492 | orchestrator | 2025-06-02 17:39:09.767517 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-06-02 17:39:09.767527 | orchestrator | Monday 02 June 2025 17:39:03 +0000 (0:00:08.298) 
0:02:15.425 *********** 2025-06-02 17:39:09.767536 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:39:09.767546 | orchestrator | 2025-06-02 17:39:09.767555 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-06-02 17:39:09.767565 | orchestrator | Monday 02 June 2025 17:39:04 +0000 (0:00:00.817) 0:02:16.243 *********** 2025-06-02 17:39:09.767574 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-02 17:39:09.767584 | orchestrator | enable_outward_rabbitmq_True 2025-06-02 17:39:09.767593 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-02 17:39:09.767603 | orchestrator | outward_rabbitmq_restart 2025-06-02 17:39:09.767612 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:39:09.767621 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:39:09.767631 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:39:09.767640 | orchestrator | 2025-06-02 17:39:09.767650 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-06-02 17:39:09.767659 | orchestrator | skipping: no hosts matched 2025-06-02 17:39:09.767675 | orchestrator | 2025-06-02 17:39:09.767684 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-06-02 17:39:09.767694 | orchestrator | skipping: no hosts matched 2025-06-02 17:39:09.767703 | orchestrator | 2025-06-02 17:39:09.767718 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-06-02 17:39:09.767727 | orchestrator | skipping: no hosts matched 2025-06-02 17:39:09.767737 | orchestrator | 2025-06-02 17:39:09.767746 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:39:09.767757 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-02 
17:39:09.767767 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 17:39:09.767777 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:39:09.767786 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:39:09.767796 | orchestrator | 2025-06-02 17:39:09.767805 | orchestrator | 2025-06-02 17:39:09.767815 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:39:09.767825 | orchestrator | Monday 02 June 2025 17:39:06 +0000 (0:00:02.402) 0:02:18.645 *********** 2025-06-02 17:39:09.767834 | orchestrator | =============================================================================== 2025-06-02 17:39:09.767844 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 80.21s 2025-06-02 17:39:09.767853 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.14s 2025-06-02 17:39:09.767862 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.90s 2025-06-02 17:39:09.767872 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.70s 2025-06-02 17:39:09.767881 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.62s 2025-06-02 17:39:09.767891 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.90s 2025-06-02 17:39:09.767900 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.41s 2025-06-02 17:39:09.767910 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.40s 2025-06-02 17:39:09.767919 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.18s 2025-06-02 17:39:09.767929 | 
orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.08s 2025-06-02 17:39:09.767938 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.91s 2025-06-02 17:39:09.767947 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.74s 2025-06-02 17:39:09.767957 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.61s 2025-06-02 17:39:09.767966 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.50s 2025-06-02 17:39:09.767976 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.43s 2025-06-02 17:39:09.767985 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.11s 2025-06-02 17:39:09.767994 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 1.02s 2025-06-02 17:39:09.768008 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.01s 2025-06-02 17:39:09.768018 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.98s 2025-06-02 17:39:09.768028 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.95s 2025-06-02 17:39:09.768037 | orchestrator | 2025-06-02 17:39:09 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:39:09.768053 | orchestrator | 2025-06-02 17:39:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:39:12.806084 | orchestrator | 2025-06-02 17:39:12 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:39:12.806227 | orchestrator | 2025-06-02 17:39:12 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:39:12.806242 | orchestrator | 2025-06-02 17:39:12 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 
2025-06-02 17:39:12.806254 | orchestrator | 2025-06-02 17:39:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:39:15.848297 | orchestrator | 2025-06-02 17:39:15 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:39:15.851871 | orchestrator | 2025-06-02 17:39:15 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:39:15.854771 | orchestrator | 2025-06-02 17:39:15 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:39:15.854844 | orchestrator | 2025-06-02 17:39:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:39:18.896020 | orchestrator | 2025-06-02 17:39:18 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:39:18.896145 | orchestrator | 2025-06-02 17:39:18 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:39:18.897863 | orchestrator | 2025-06-02 17:39:18 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:39:18.897907 | orchestrator | 2025-06-02 17:39:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:39:21.938835 | orchestrator | 2025-06-02 17:39:21 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:39:21.941933 | orchestrator | 2025-06-02 17:39:21 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:39:21.942317 | orchestrator | 2025-06-02 17:39:21 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:39:21.942346 | orchestrator | 2025-06-02 17:39:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:39:24.990187 | orchestrator | 2025-06-02 17:39:24 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:39:24.998168 | orchestrator | 2025-06-02 17:39:24 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:39:24.998634 | orchestrator | 2025-06-02 
17:39:24 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:39:24.998680 | orchestrator | 2025-06-02 17:39:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:39:28.040256 | orchestrator | 2025-06-02 17:39:28 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:39:28.046585 | orchestrator | 2025-06-02 17:39:28 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:39:28.047862 | orchestrator | 2025-06-02 17:39:28 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:39:28.049213 | orchestrator | 2025-06-02 17:39:28 | INFO  | Task 1cbf511b-c977-4ce7-9545-dcc159a88c42 is in state STARTED 2025-06-02 17:39:28.049253 | orchestrator | 2025-06-02 17:39:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:39:31.099973 | orchestrator | 2025-06-02 17:39:31 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:39:31.100815 | orchestrator | 2025-06-02 17:39:31 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:39:31.101651 | orchestrator | 2025-06-02 17:39:31 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:39:31.102540 | orchestrator | 2025-06-02 17:39:31 | INFO  | Task 1cbf511b-c977-4ce7-9545-dcc159a88c42 is in state STARTED 2025-06-02 17:39:31.102575 | orchestrator | 2025-06-02 17:39:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:39:34.143858 | orchestrator | 2025-06-02 17:39:34 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:39:34.144675 | orchestrator | 2025-06-02 17:39:34 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:39:34.145806 | orchestrator | 2025-06-02 17:39:34 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:39:34.146691 | orchestrator | 2025-06-02 17:39:34 | INFO  | Task 
1cbf511b-c977-4ce7-9545-dcc159a88c42 is in state STARTED 2025-06-02 17:39:34.146721 | orchestrator | 2025-06-02 17:39:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:39:37.209464 | orchestrator | 2025-06-02 17:39:37 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:39:37.209651 | orchestrator | 2025-06-02 17:39:37 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:39:37.209666 | orchestrator | 2025-06-02 17:39:37 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:39:37.209678 | orchestrator | 2025-06-02 17:39:37 | INFO  | Task 1cbf511b-c977-4ce7-9545-dcc159a88c42 is in state STARTED 2025-06-02 17:39:37.209689 | orchestrator | 2025-06-02 17:39:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:39:40.247050 | orchestrator | 2025-06-02 17:39:40 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:39:40.247384 | orchestrator | 2025-06-02 17:39:40 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:39:40.248156 | orchestrator | 2025-06-02 17:39:40 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:39:40.250803 | orchestrator | 2025-06-02 17:39:40 | INFO  | Task 1cbf511b-c977-4ce7-9545-dcc159a88c42 is in state STARTED 2025-06-02 17:39:40.250879 | orchestrator | 2025-06-02 17:39:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:39:43.281086 | orchestrator | 2025-06-02 17:39:43 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:39:43.281449 | orchestrator | 2025-06-02 17:39:43 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:39:43.285892 | orchestrator | 2025-06-02 17:39:43 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED 2025-06-02 17:39:43.286631 | orchestrator | 2025-06-02 17:39:43 | INFO  | Task 
1cbf511b-c977-4ce7-9545-dcc159a88c42 is in state STARTED
2025-06-02 17:39:43.286700 | orchestrator | 2025-06-02 17:39:43 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:39:46.348210 | orchestrator | 2025-06-02 17:39:46 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:39:46.349305 | orchestrator | 2025-06-02 17:39:46 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:39:46.351111 | orchestrator | 2025-06-02 17:39:46 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:39:46.351770 | orchestrator | 2025-06-02 17:39:46 | INFO  | Task 1cbf511b-c977-4ce7-9545-dcc159a88c42 is in state SUCCESS
2025-06-02 17:39:46.352100 | orchestrator | 2025-06-02 17:39:46 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:39:49.401343 | orchestrator | 2025-06-02 17:39:49 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:39:49.401587 | orchestrator | 2025-06-02 17:39:49 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:39:49.403317 | orchestrator | 2025-06-02 17:39:49 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:39:49.403362 | orchestrator | 2025-06-02 17:39:49 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:39:52.443358 | orchestrator | 2025-06-02 17:39:52 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:39:52.445008 | orchestrator | 2025-06-02 17:39:52 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:39:52.446234 | orchestrator | 2025-06-02 17:39:52 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:39:52.446546 | orchestrator | 2025-06-02 17:39:52 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:39:55.485528 | orchestrator | 2025-06-02 17:39:55 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:39:55.485791 | orchestrator | 2025-06-02 17:39:55 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:39:55.486938 | orchestrator | 2025-06-02 17:39:55 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:39:55.487256 | orchestrator | 2025-06-02 17:39:55 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:39:58.543948 | orchestrator | 2025-06-02 17:39:58 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:39:58.545580 | orchestrator | 2025-06-02 17:39:58 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:39:58.548243 | orchestrator | 2025-06-02 17:39:58 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:39:58.548285 | orchestrator | 2025-06-02 17:39:58 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:40:01.602654 | orchestrator | 2025-06-02 17:40:01 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:40:01.605845 | orchestrator | 2025-06-02 17:40:01 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:40:01.607765 | orchestrator | 2025-06-02 17:40:01 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:40:01.607805 | orchestrator | 2025-06-02 17:40:01 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:40:04.652677 | orchestrator | 2025-06-02 17:40:04 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:40:04.654676 | orchestrator | 2025-06-02 17:40:04 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:40:04.657215 | orchestrator | 2025-06-02 17:40:04 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:40:04.657799 | orchestrator | 2025-06-02 17:40:04 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:40:07.692738 | orchestrator | 2025-06-02 17:40:07 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:40:07.694668 | orchestrator | 2025-06-02 17:40:07 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:40:07.697666 | orchestrator | 2025-06-02 17:40:07 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:40:07.697725 | orchestrator | 2025-06-02 17:40:07 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:40:10.749403 | orchestrator | 2025-06-02 17:40:10 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:40:10.749934 | orchestrator | 2025-06-02 17:40:10 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:40:10.751415 | orchestrator | 2025-06-02 17:40:10 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state STARTED
2025-06-02 17:40:10.751538 | orchestrator | 2025-06-02 17:40:10 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:40:13.795651 | orchestrator | 2025-06-02 17:40:13 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:40:13.796840 | orchestrator | 2025-06-02 17:40:13 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED
2025-06-02 17:40:13.799496 | orchestrator | 2025-06-02 17:40:13 | INFO  | Task 1d0d5f72-9131-48dc-bb7f-5142f499ce24 is in state SUCCESS
2025-06-02 17:40:13.799681 | orchestrator |
2025-06-02 17:40:13.799705 | orchestrator | None
2025-06-02 17:40:13.811522 | orchestrator |
2025-06-02 17:40:13.811631 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:40:13.811644 | orchestrator |
2025-06-02 17:40:13.811653 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:40:13.811661 | orchestrator | Monday 02 June 2025 17:37:41 +0000 (0:00:00.195) 0:00:00.195 ***********
2025-06-02 17:40:13.811670 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:40:13.811679 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:40:13.811687 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:40:13.811695 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:40:13.811703 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:40:13.811711 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:40:13.811719 | orchestrator |
2025-06-02 17:40:13.811727 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:40:13.811736 | orchestrator | Monday 02 June 2025 17:37:42 +0000 (0:00:00.915) 0:00:01.110 ***********
2025-06-02 17:40:13.811744 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-06-02 17:40:13.811754 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-06-02 17:40:13.811762 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-06-02 17:40:13.811770 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-06-02 17:40:13.811778 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-06-02 17:40:13.811786 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-06-02 17:40:13.811793 | orchestrator |
2025-06-02 17:40:13.811802 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-06-02 17:40:13.811809 | orchestrator |
2025-06-02 17:40:13.811817 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-06-02 17:40:13.811825 | orchestrator | Monday 02 June 2025 17:37:43 +0000 (0:00:01.371) 0:00:02.482 ***********
2025-06-02 17:40:13.811835 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:40:13.811845 | orchestrator |
2025-06-02 17:40:13.811853 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-06-02 17:40:13.811861 | orchestrator | Monday 02 June 2025 17:37:45 +0000 (0:00:02.112) 0:00:04.594 ***********
2025-06-02 17:40:13.811871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.811882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.811916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.811934 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.811944 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.811954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.811963 | orchestrator |
2025-06-02 17:40:13.811990 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-06-02 17:40:13.812000 | orchestrator | Monday 02 June 2025 17:37:48 +0000 (0:00:02.608) 0:00:07.203 ***********
2025-06-02 17:40:13.812009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812018 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812046 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812061 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812070 | orchestrator |
2025-06-02 17:40:13.812080 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-06-02 17:40:13.812088 | orchestrator | Monday 02 June 2025 17:37:50 +0000 (0:00:02.112) 0:00:09.315 ***********
2025-06-02 17:40:13.812102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812128 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812157 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812166 | orchestrator |
2025-06-02 17:40:13.812175 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-06-02 17:40:13.812184 | orchestrator | Monday 02 June 2025 17:37:52 +0000 (0:00:02.054) 0:00:11.370 ***********
2025-06-02 17:40:13.812193 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812251 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812261 | orchestrator |
2025-06-02 17:40:13.812274 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-06-02 17:40:13.812283 | orchestrator | Monday 02 June 2025 17:37:54 +0000 (0:00:01.911) 0:00:13.281 ***********
2025-06-02 17:40:13.812293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812327 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812335 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812343 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-06-02 17:40:13.812351 | orchestrator |
2025-06-02 17:40:13.812360 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-06-02 17:40:13.812367 | orchestrator | Monday 02 June 2025 17:37:55 +0000 (0:00:01.486) 0:00:14.768 ***********
2025-06-02 17:40:13.812375 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:40:13.812383 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:40:13.812391 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:40:13.812399 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:40:13.812406 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:40:13.812430 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:40:13.812438 | orchestrator |
2025-06-02 17:40:13.812446 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-06-02 17:40:13.812459 | orchestrator | Monday 02 June 2025 17:37:58 +0000 (0:00:02.621) 0:00:17.389 ***********
2025-06-02 17:40:13.812467 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-06-02 17:40:13.812476 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-06-02 17:40:13.812483 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-06-02 17:40:13.812491 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-06-02 17:40:13.812499 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-06-02 17:40:13.812507 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 17:40:13.812514 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-06-02 17:40:13.812522 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 17:40:13.812535 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 17:40:13.812543 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 17:40:13.812550 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 17:40:13.812558 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 17:40:13.812567 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-06-02 17:40:13.812575 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 17:40:13.812589 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 17:40:13.812597 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 17:40:13.812605 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 17:40:13.812613 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 17:40:13.812621 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-06-02 17:40:13.812629 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 17:40:13.812637 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 17:40:13.812645 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 17:40:13.812653 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 17:40:13.812660 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 17:40:13.812668 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-06-02 17:40:13.812676 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 17:40:13.812684 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 17:40:13.812692 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 17:40:13.812699 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 17:40:13.812707 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 17:40:13.812715 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-06-02 17:40:13.812723 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 17:40:13.812730 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 17:40:13.812738 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 17:40:13.812746 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 17:40:13.812754 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 17:40:13.812762 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-06-02 17:40:13.812773 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 17:40:13.812781 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 17:40:13.812789 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-06-02 17:40:13.812797 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 17:40:13.812805 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-06-02 17:40:13.812813 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-06-02 17:40:13.812825 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-06-02 17:40:13.812837 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-06-02 17:40:13.812846 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-06-02 17:40:13.812853 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-06-02 17:40:13.812861 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 17:40:13.812869 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-06-02 17:40:13.812877 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 17:40:13.812884 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 17:40:13.812892 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-06-02 17:40:13.812900 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 17:40:13.812908 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-06-02 17:40:13.812915 | orchestrator |
2025-06-02 17:40:13.812923 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 17:40:13.812931 | orchestrator | Monday 02 June 2025 17:38:19 +0000 (0:00:20.834) 0:00:38.223 ***********
2025-06-02 17:40:13.812939 | orchestrator |
2025-06-02 17:40:13.812947 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 17:40:13.812955 | orchestrator | Monday 02 June 2025 17:38:19 +0000 (0:00:00.070) 0:00:38.294 ***********
2025-06-02 17:40:13.812962 | orchestrator |
2025-06-02 17:40:13.812970 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 17:40:13.812978 | orchestrator | Monday 02 June 2025 17:38:19 +0000 (0:00:00.190) 0:00:38.485 ***********
2025-06-02 17:40:13.812986 | orchestrator |
2025-06-02 17:40:13.812993 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 17:40:13.813001 | orchestrator | Monday 02 June 2025 17:38:19 +0000 (0:00:00.136) 0:00:38.622 ***********
2025-06-02 17:40:13.813009 | orchestrator |
2025-06-02 17:40:13.813017 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 17:40:13.813025 | orchestrator | Monday 02 June 2025 17:38:19 +0000 (0:00:00.177) 0:00:38.800 ***********
2025-06-02 17:40:13.813032 | orchestrator |
2025-06-02 17:40:13.813040 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-06-02 17:40:13.813048 | orchestrator | Monday 02 June 2025 17:38:19 +0000 (0:00:00.123) 0:00:38.923 ***********
2025-06-02 17:40:13.813056 | orchestrator |
2025-06-02 17:40:13.813064 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-06-02 17:40:13.813071 | orchestrator | Monday 02 June 2025 17:38:20 +0000 (0:00:00.199) 0:00:39.123 ***********
2025-06-02 17:40:13.813079 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:40:13.813087 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:40:13.813095 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:40:13.813103 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:40:13.813111 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:40:13.813118 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:40:13.813126 | orchestrator |
2025-06-02 17:40:13.813134 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-06-02 17:40:13.813147 | orchestrator | Monday 02 June 2025 17:38:22 +0000 (0:00:02.192) 0:00:41.315 ***********
2025-06-02 17:40:13.813155 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:40:13.813163 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:40:13.813171 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:40:13.813179 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:40:13.813186 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:40:13.813194 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:40:13.813202 | orchestrator |
2025-06-02 17:40:13.813210 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-06-02 17:40:13.813217 | orchestrator |
2025-06-02 17:40:13.813225 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-02 17:40:13.813233 | orchestrator | Monday 02 June 2025 17:38:57 +0000 (0:00:35.006) 0:01:16.322 ***********
2025-06-02 17:40:13.813245 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:40:13.813253 | orchestrator |
2025-06-02 17:40:13.813261 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-06-02 17:40:13.813269 | orchestrator | Monday 02 June 2025 17:38:57 +0000 (0:00:00.522) 0:01:16.845 ***********
2025-06-02 17:40:13.813276 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:40:13.813284 | orchestrator |
2025-06-02 17:40:13.813292 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-06-02 17:40:13.813300 | orchestrator | Monday 02 June 2025 17:38:58 +0000 (0:00:00.748) 0:01:17.594 ***********
2025-06-02 17:40:13.813308 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:40:13.813315 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:40:13.813323 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:40:13.813331 | orchestrator |
2025-06-02 17:40:13.813338 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-06-02 17:40:13.813346 | orchestrator | Monday 02 June 2025 17:38:59 +0000 (0:00:00.792) 0:01:18.386 ***********
2025-06-02 17:40:13.813354 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:40:13.813362 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:40:13.813369 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:40:13.813377 | orchestrator |
2025-06-02 17:40:13.813389 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-06-02 17:40:13.813397 | orchestrator | Monday 02 June 2025 17:38:59 +0000 (0:00:00.330) 0:01:18.716 ***********
2025-06-02 17:40:13.813404 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:40:13.813412 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:40:13.813469 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:40:13.813477 | orchestrator |
2025-06-02 17:40:13.813485 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-06-02 17:40:13.813492 | orchestrator | Monday 02 June 2025 17:38:59 +0000 (0:00:00.319) 0:01:19.036 ***********
2025-06-02 17:40:13.813500 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:40:13.813508 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:40:13.813516 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:40:13.813524 | orchestrator |
2025-06-02 17:40:13.813532 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-06-02 17:40:13.813540 | orchestrator | Monday 02 June 2025 17:39:00 +0000 (0:00:00.533) 0:01:19.570 ***********
2025-06-02 17:40:13.813547 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:40:13.813555 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:40:13.813563 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:40:13.813571 | orchestrator |
2025-06-02 17:40:13.813579 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-06-02 17:40:13.813586 | orchestrator | Monday 02 June 2025 17:39:00 +0000 (0:00:00.359) 0:01:19.929 ***********
2025-06-02 17:40:13.813594 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:40:13.813602 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:40:13.813610 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:40:13.813624 | orchestrator |
2025-06-02 17:40:13.813632 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-06-02 17:40:13.813640 | orchestrator | Monday 02 June 2025 17:39:01 +0000 (0:00:00.300) 0:01:20.230 ***********
2025-06-02 17:40:13.813648 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:40:13.813655 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:40:13.813663 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:40:13.813671 | orchestrator |
2025-06-02 17:40:13.813679 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-06-02 17:40:13.813687 | orchestrator | Monday 02 June 2025 17:39:01 +0000 (0:00:00.559) 0:01:20.539 ***********
2025-06-02 17:40:13.813694 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:40:13.813702 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:40:13.813710 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:40:13.813718 | orchestrator |
2025-06-02 17:40:13.813726 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-06-02 17:40:13.813734 | orchestrator | Monday 02 June 2025 17:39:01 +0000 (0:00:00.350) 0:01:21.099 ***********
2025-06-02 17:40:13.813741 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:40:13.813749 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:40:13.813757 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:40:13.813765 | orchestrator |
2025-06-02 17:40:13.813772 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-06-02 17:40:13.813780 | orchestrator | Monday 02 June 2025 17:39:02 +0000 (0:00:00.328) 0:01:21.450 ***********
2025-06-02 17:40:13.813788 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:40:13.813795 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:40:13.813803 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:40:13.813811 | orchestrator |
2025-06-02 17:40:13.813819 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-06-02 17:40:13.813826 | orchestrator | Monday 02 June 2025 17:39:02 +0000 (0:00:00.319) 0:01:21.779 ***********
2025-06-02 17:40:13.813834 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:40:13.813842 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:40:13.813850 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:40:13.813857 | orchestrator |
2025-06-02 17:40:13.813865 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-06-02 17:40:13.813873 | orchestrator | Monday 02 June 2025 17:39:02 +0000 (0:00:00.319)
0:01:22.098 *********** 2025-06-02 17:40:13.813881 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:40:13.813888 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.813896 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.813904 | orchestrator | 2025-06-02 17:40:13.813912 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-06-02 17:40:13.813919 | orchestrator | Monday 02 June 2025 17:39:03 +0000 (0:00:00.684) 0:01:22.783 *********** 2025-06-02 17:40:13.813927 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:40:13.813935 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.813943 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.813950 | orchestrator | 2025-06-02 17:40:13.813958 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-06-02 17:40:13.813966 | orchestrator | Monday 02 June 2025 17:39:04 +0000 (0:00:00.339) 0:01:23.122 *********** 2025-06-02 17:40:13.813982 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:40:13.813990 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.813997 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.814003 | orchestrator | 2025-06-02 17:40:13.814010 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-06-02 17:40:13.814054 | orchestrator | Monday 02 June 2025 17:39:04 +0000 (0:00:00.314) 0:01:23.437 *********** 2025-06-02 17:40:13.814062 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:40:13.814068 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.814075 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.814088 | orchestrator | 2025-06-02 17:40:13.814095 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-06-02 17:40:13.814102 | orchestrator | Monday 02 June 2025 17:39:04 +0000 (0:00:00.383) 
0:01:23.821 *********** 2025-06-02 17:40:13.814108 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:40:13.814115 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.814121 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.814128 | orchestrator | 2025-06-02 17:40:13.814134 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2025-06-02 17:40:13.814141 | orchestrator | Monday 02 June 2025 17:39:05 +0000 (0:00:00.535) 0:01:24.356 *********** 2025-06-02 17:40:13.814147 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:40:13.814154 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.814166 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.814173 | orchestrator | 2025-06-02 17:40:13.814179 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-02 17:40:13.814186 | orchestrator | Monday 02 June 2025 17:39:05 +0000 (0:00:00.308) 0:01:24.665 *********** 2025-06-02 17:40:13.814192 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:40:13.814199 | orchestrator | 2025-06-02 17:40:13.814206 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-06-02 17:40:13.814212 | orchestrator | Monday 02 June 2025 17:39:06 +0000 (0:00:00.568) 0:01:25.233 *********** 2025-06-02 17:40:13.814219 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:40:13.814225 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:40:13.814232 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:40:13.814239 | orchestrator | 2025-06-02 17:40:13.814246 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-02 17:40:13.814253 | orchestrator | Monday 02 June 2025 17:39:07 +0000 (0:00:00.979) 0:01:26.213 *********** 2025-06-02 17:40:13.814259 | orchestrator | ok: 
[testbed-node-0] 2025-06-02 17:40:13.814266 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:40:13.814272 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:40:13.814279 | orchestrator | 2025-06-02 17:40:13.814285 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-02 17:40:13.814292 | orchestrator | Monday 02 June 2025 17:39:07 +0000 (0:00:00.457) 0:01:26.671 *********** 2025-06-02 17:40:13.814298 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:40:13.814305 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.814311 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.814318 | orchestrator | 2025-06-02 17:40:13.814324 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-02 17:40:13.814331 | orchestrator | Monday 02 June 2025 17:39:07 +0000 (0:00:00.407) 0:01:27.079 *********** 2025-06-02 17:40:13.814337 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:40:13.814344 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.814350 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.814357 | orchestrator | 2025-06-02 17:40:13.814363 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-02 17:40:13.814370 | orchestrator | Monday 02 June 2025 17:39:08 +0000 (0:00:00.369) 0:01:27.449 *********** 2025-06-02 17:40:13.814377 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:40:13.814383 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.814390 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.814396 | orchestrator | 2025-06-02 17:40:13.814403 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-02 17:40:13.814409 | orchestrator | Monday 02 June 2025 17:39:08 +0000 (0:00:00.545) 0:01:27.994 *********** 2025-06-02 17:40:13.814428 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 17:40:13.814435 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.814442 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.814448 | orchestrator | 2025-06-02 17:40:13.814455 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-02 17:40:13.814467 | orchestrator | Monday 02 June 2025 17:39:09 +0000 (0:00:00.334) 0:01:28.329 *********** 2025-06-02 17:40:13.814474 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:40:13.814480 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.814487 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.814494 | orchestrator | 2025-06-02 17:40:13.814500 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-02 17:40:13.814507 | orchestrator | Monday 02 June 2025 17:39:09 +0000 (0:00:00.341) 0:01:28.670 *********** 2025-06-02 17:40:13.814514 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:40:13.814520 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.814527 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.814533 | orchestrator | 2025-06-02 17:40:13.814540 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-02 17:40:13.814547 | orchestrator | Monday 02 June 2025 17:39:09 +0000 (0:00:00.339) 0:01:29.010 *********** 2025-06-02 17:40:13.814554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 
'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814638 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814663 | orchestrator | 2025-06-02 17:40:13.814670 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-02 17:40:13.814677 | orchestrator | Monday 02 June 2025 17:39:11 +0000 (0:00:01.838) 0:01:30.848 *********** 2025-06-02 17:40:13.814684 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814691 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814709 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814781 | orchestrator | 2025-06-02 17:40:13.814797 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-02 17:40:13.814809 | orchestrator | Monday 02 June 2025 17:39:16 +0000 (0:00:04.491) 0:01:35.340 *********** 2025-06-02 17:40:13.814820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814830 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.814919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815065 | orchestrator | 2025-06-02 17:40:13.815077 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 17:40:13.815089 | orchestrator | Monday 02 June 2025 17:39:18 +0000 (0:00:02.170) 0:01:37.510 *********** 2025-06-02 17:40:13.815100 | orchestrator | 2025-06-02 17:40:13.815112 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2025-06-02 17:40:13.815122 | orchestrator | Monday 02 June 2025 17:39:18 +0000 (0:00:00.075) 0:01:37.586 *********** 2025-06-02 17:40:13.815132 | orchestrator | 2025-06-02 17:40:13.815142 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 17:40:13.815154 | orchestrator | Monday 02 June 2025 17:39:18 +0000 (0:00:00.100) 0:01:37.686 *********** 2025-06-02 17:40:13.815165 | orchestrator | 2025-06-02 17:40:13.815176 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-02 17:40:13.815187 | orchestrator | Monday 02 June 2025 17:39:18 +0000 (0:00:00.099) 0:01:37.786 *********** 2025-06-02 17:40:13.815198 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:40:13.815205 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:40:13.815212 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:40:13.815218 | orchestrator | 2025-06-02 17:40:13.815225 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-02 17:40:13.815232 | orchestrator | Monday 02 June 2025 17:39:21 +0000 (0:00:02.637) 0:01:40.424 *********** 2025-06-02 17:40:13.815238 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:40:13.815245 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:40:13.815251 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:40:13.815258 | orchestrator | 2025-06-02 17:40:13.815264 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-02 17:40:13.815271 | orchestrator | Monday 02 June 2025 17:39:29 +0000 (0:00:07.826) 0:01:48.250 *********** 2025-06-02 17:40:13.815277 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:40:13.815284 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:40:13.815290 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:40:13.815297 | orchestrator | 2025-06-02 
17:40:13.815304 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-02 17:40:13.815310 | orchestrator | Monday 02 June 2025 17:39:31 +0000 (0:00:02.713) 0:01:50.963 *********** 2025-06-02 17:40:13.815317 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:40:13.815328 | orchestrator | 2025-06-02 17:40:13.815339 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-02 17:40:13.815348 | orchestrator | Monday 02 June 2025 17:39:31 +0000 (0:00:00.129) 0:01:51.093 *********** 2025-06-02 17:40:13.815358 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:40:13.815369 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:40:13.815379 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:40:13.815389 | orchestrator | 2025-06-02 17:40:13.815399 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-02 17:40:13.815463 | orchestrator | Monday 02 June 2025 17:39:32 +0000 (0:00:00.751) 0:01:51.845 *********** 2025-06-02 17:40:13.815477 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.815487 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.815497 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:40:13.815507 | orchestrator | 2025-06-02 17:40:13.815518 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-02 17:40:13.815529 | orchestrator | Monday 02 June 2025 17:39:33 +0000 (0:00:00.797) 0:01:52.642 *********** 2025-06-02 17:40:13.815549 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:40:13.815560 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:40:13.815571 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:40:13.815581 | orchestrator | 2025-06-02 17:40:13.815592 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-02 17:40:13.815604 | orchestrator | Monday 02 June 2025 
17:39:34 +0000 (0:00:00.744) 0:01:53.387 *********** 2025-06-02 17:40:13.815615 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.815626 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.815637 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:40:13.815648 | orchestrator | 2025-06-02 17:40:13.815660 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-02 17:40:13.815667 | orchestrator | Monday 02 June 2025 17:39:34 +0000 (0:00:00.608) 0:01:53.995 *********** 2025-06-02 17:40:13.815674 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:40:13.815680 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:40:13.815697 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:40:13.815704 | orchestrator | 2025-06-02 17:40:13.815711 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-02 17:40:13.815717 | orchestrator | Monday 02 June 2025 17:39:35 +0000 (0:00:01.089) 0:01:55.085 *********** 2025-06-02 17:40:13.815723 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:40:13.815729 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:40:13.815735 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:40:13.815741 | orchestrator | 2025-06-02 17:40:13.815747 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-02 17:40:13.815753 | orchestrator | Monday 02 June 2025 17:39:38 +0000 (0:00:02.094) 0:01:57.180 *********** 2025-06-02 17:40:13.815760 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:40:13.815766 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:40:13.815772 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:40:13.815778 | orchestrator | 2025-06-02 17:40:13.815784 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-02 17:40:13.815790 | orchestrator | Monday 02 June 2025 17:39:38 +0000 (0:00:00.429) 0:01:57.610 *********** 
2025-06-02 17:40:13.815797 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815804 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815811 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815818 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815827 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815839 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815850 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815857 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815868 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815875 | orchestrator | 2025-06-02 17:40:13.815881 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-02 17:40:13.815888 | 
orchestrator | Monday 02 June 2025 17:39:39 +0000 (0:00:01.448) 0:01:59.058 *********** 2025-06-02 17:40:13.815894 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815901 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815907 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815913 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815938 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815961 | orchestrator | 2025-06-02 
17:40:13.815967 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-02 17:40:13.815973 | orchestrator | Monday 02 June 2025 17:39:45 +0000 (0:00:05.121) 0:02:04.180 *********** 2025-06-02 17:40:13.815985 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815992 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.815998 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.816004 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.816011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.816022 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.816028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.816035 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.816045 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:40:13.816051 | orchestrator | 2025-06-02 17:40:13.816058 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 17:40:13.816064 | orchestrator | Monday 02 June 2025 17:39:47 +0000 (0:00:02.815) 0:02:06.996 *********** 2025-06-02 17:40:13.816070 | orchestrator | 2025-06-02 17:40:13.816076 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 17:40:13.816082 | orchestrator | Monday 02 June 2025 17:39:47 +0000 (0:00:00.067) 0:02:07.063 *********** 2025-06-02 17:40:13.816088 | orchestrator | 2025-06-02 17:40:13.816095 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-02 17:40:13.816101 | orchestrator | Monday 02 June 2025 17:39:48 +0000 (0:00:00.077) 0:02:07.141 *********** 2025-06-02 17:40:13.816107 | orchestrator | 2025-06-02 17:40:13.816113 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-02 17:40:13.816119 | orchestrator | Monday 02 June 2025 17:39:48 +0000 (0:00:00.071) 0:02:07.213 *********** 2025-06-02 17:40:13.816125 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:40:13.816131 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:40:13.816138 | orchestrator | 2025-06-02 17:40:13.816148 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-02 17:40:13.816154 | orchestrator | Monday 02 June 2025 17:39:54 +0000 (0:00:06.331) 0:02:13.544 *********** 2025-06-02 17:40:13.816160 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:40:13.816167 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:40:13.816173 | orchestrator | 2025-06-02 17:40:13.816179 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-02 17:40:13.816185 | orchestrator | Monday 02 June 2025 
17:40:00 +0000 (0:00:06.223) 0:02:19.767 *********** 2025-06-02 17:40:13.816191 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:40:13.816197 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:40:13.816203 | orchestrator | 2025-06-02 17:40:13.816210 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-02 17:40:13.816216 | orchestrator | Monday 02 June 2025 17:40:06 +0000 (0:00:06.262) 0:02:26.030 *********** 2025-06-02 17:40:13.816222 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:40:13.816228 | orchestrator | 2025-06-02 17:40:13.816234 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-02 17:40:13.816244 | orchestrator | Monday 02 June 2025 17:40:07 +0000 (0:00:00.121) 0:02:26.151 *********** 2025-06-02 17:40:13.816251 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:40:13.816257 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:40:13.816263 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:40:13.816269 | orchestrator | 2025-06-02 17:40:13.816275 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-02 17:40:13.816281 | orchestrator | Monday 02 June 2025 17:40:08 +0000 (0:00:01.197) 0:02:27.348 *********** 2025-06-02 17:40:13.816287 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.816294 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.816300 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:40:13.816306 | orchestrator | 2025-06-02 17:40:13.816312 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-02 17:40:13.816318 | orchestrator | Monday 02 June 2025 17:40:08 +0000 (0:00:00.693) 0:02:28.042 *********** 2025-06-02 17:40:13.816324 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:40:13.816330 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:40:13.816337 | orchestrator | ok: 
[testbed-node-1] 2025-06-02 17:40:13.816343 | orchestrator | 2025-06-02 17:40:13.816349 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-02 17:40:13.816355 | orchestrator | Monday 02 June 2025 17:40:09 +0000 (0:00:00.886) 0:02:28.928 *********** 2025-06-02 17:40:13.816361 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:40:13.816367 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:40:13.816373 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:40:13.816379 | orchestrator | 2025-06-02 17:40:13.816386 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-02 17:40:13.816392 | orchestrator | Monday 02 June 2025 17:40:10 +0000 (0:00:00.697) 0:02:29.626 *********** 2025-06-02 17:40:13.816398 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:40:13.816404 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:40:13.816410 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:40:13.816430 | orchestrator | 2025-06-02 17:40:13.816436 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-02 17:40:13.816442 | orchestrator | Monday 02 June 2025 17:40:11 +0000 (0:00:01.001) 0:02:30.628 *********** 2025-06-02 17:40:13.816449 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:40:13.816455 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:40:13.816461 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:40:13.816467 | orchestrator | 2025-06-02 17:40:13.816473 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:40:13.816480 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-02 17:40:13.816487 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-02 17:40:13.816493 | orchestrator | testbed-node-2 : ok=43  changed=19  
unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-02 17:40:13.816499 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:40:13.816506 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:40:13.816515 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:40:13.816521 | orchestrator | 2025-06-02 17:40:13.816528 | orchestrator | 2025-06-02 17:40:13.816534 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:40:13.816540 | orchestrator | Monday 02 June 2025 17:40:12 +0000 (0:00:00.882) 0:02:31.511 *********** 2025-06-02 17:40:13.816546 | orchestrator | =============================================================================== 2025-06-02 17:40:13.816557 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 35.01s 2025-06-02 17:40:13.816563 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.83s 2025-06-02 17:40:13.816569 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.05s 2025-06-02 17:40:13.816575 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.98s 2025-06-02 17:40:13.816582 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.97s 2025-06-02 17:40:13.816588 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 5.12s 2025-06-02 17:40:13.816594 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.49s 2025-06-02 17:40:13.816604 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.82s 2025-06-02 17:40:13.816610 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch 
-------------------- 2.62s 2025-06-02 17:40:13.816616 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.61s 2025-06-02 17:40:13.816622 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.19s 2025-06-02 17:40:13.816629 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.17s 2025-06-02 17:40:13.816635 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 2.11s 2025-06-02 17:40:13.816641 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.11s 2025-06-02 17:40:13.816647 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 2.09s 2025-06-02 17:40:13.816653 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.05s 2025-06-02 17:40:13.816659 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.91s 2025-06-02 17:40:13.816665 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.84s 2025-06-02 17:40:13.816671 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.49s 2025-06-02 17:40:13.816678 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s 2025-06-02 17:40:13.816684 | orchestrator | 2025-06-02 17:40:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:40:16.851126 | orchestrator | 2025-06-02 17:40:16 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:40:16.853936 | orchestrator | 2025-06-02 17:40:16 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:40:16.854302 | orchestrator | 2025-06-02 17:40:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:40:19.902614 | orchestrator | 2025-06-02 17:40:19 | INFO  | Task 
dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:40:19.904861 | orchestrator | 2025-06-02 17:40:19 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:40:19.904910 | orchestrator | 2025-06-02 17:40:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:42:40.249289 | orchestrator | 2025-06-02 17:42:40 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:42:40.251421 | orchestrator | 2025-06-02 17:42:40 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:42:40.251466 | orchestrator | 2025-06-02 17:42:40 | INFO  | Wait 1 second(s) 
until the next check 2025-06-02 17:42:43.296914 | orchestrator | 2025-06-02 17:42:43 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:42:43.299464 | orchestrator | 2025-06-02 17:42:43 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:42:43.301545 | orchestrator | 2025-06-02 17:42:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:42:46.354275 | orchestrator | 2025-06-02 17:42:46 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:42:46.356131 | orchestrator | 2025-06-02 17:42:46 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:42:46.356244 | orchestrator | 2025-06-02 17:42:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:42:49.415660 | orchestrator | 2025-06-02 17:42:49 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:42:49.417215 | orchestrator | 2025-06-02 17:42:49 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:42:49.417291 | orchestrator | 2025-06-02 17:42:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:42:52.464235 | orchestrator | 2025-06-02 17:42:52 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:42:52.465599 | orchestrator | 2025-06-02 17:42:52 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:42:52.465638 | orchestrator | 2025-06-02 17:42:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:42:55.506848 | orchestrator | 2025-06-02 17:42:55 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:42:55.507965 | orchestrator | 2025-06-02 17:42:55 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:42:55.508053 | orchestrator | 2025-06-02 17:42:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:42:58.553690 | orchestrator | 2025-06-02 
17:42:58 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:42:58.556084 | orchestrator | 2025-06-02 17:42:58 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:42:58.556202 | orchestrator | 2025-06-02 17:42:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:43:01.605727 | orchestrator | 2025-06-02 17:43:01 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:43:01.607232 | orchestrator | 2025-06-02 17:43:01 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:43:01.607618 | orchestrator | 2025-06-02 17:43:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:43:04.664959 | orchestrator | 2025-06-02 17:43:04 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:43:04.670738 | orchestrator | 2025-06-02 17:43:04 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:43:04.670819 | orchestrator | 2025-06-02 17:43:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:43:07.717542 | orchestrator | 2025-06-02 17:43:07 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:43:07.720652 | orchestrator | 2025-06-02 17:43:07 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:43:07.720735 | orchestrator | 2025-06-02 17:43:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:43:10.762787 | orchestrator | 2025-06-02 17:43:10 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED 2025-06-02 17:43:10.763306 | orchestrator | 2025-06-02 17:43:10 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state STARTED 2025-06-02 17:43:10.763348 | orchestrator | 2025-06-02 17:43:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:43:13.818257 | orchestrator | 2025-06-02 17:43:13 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state 
STARTED 2025-06-02 17:43:13.818839 | orchestrator | 2025-06-02 17:43:13 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:43:13.830215 | orchestrator | 2025-06-02 17:43:13 | INFO  | Task 9d767c65-cbee-4fc6-be50-fe9644b74d76 is in state SUCCESS 2025-06-02 17:43:13.831664 | orchestrator | 2025-06-02 17:43:13.831695 | orchestrator | 2025-06-02 17:43:13.831704 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:43:13.831713 | orchestrator | 2025-06-02 17:43:13.831721 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:43:13.831729 | orchestrator | Monday 02 June 2025 17:36:24 +0000 (0:00:00.625) 0:00:00.628 *********** 2025-06-02 17:43:13.831736 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:13.831745 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:13.831752 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:13.831760 | orchestrator | 2025-06-02 17:43:13.831767 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:43:13.831775 | orchestrator | Monday 02 June 2025 17:36:25 +0000 (0:00:00.445) 0:00:01.074 *********** 2025-06-02 17:43:13.831784 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-06-02 17:43:13.831792 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-06-02 17:43:13.831799 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-06-02 17:43:13.831806 | orchestrator | 2025-06-02 17:43:13.831814 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-06-02 17:43:13.831821 | orchestrator | 2025-06-02 17:43:13.831828 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-02 17:43:13.831858 | orchestrator | Monday 02 June 2025 17:36:26 +0000 (0:00:01.016) 
0:00:02.090 *********** 2025-06-02 17:43:13.831866 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.831873 | orchestrator | 2025-06-02 17:43:13.831880 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-06-02 17:43:13.831887 | orchestrator | Monday 02 June 2025 17:36:27 +0000 (0:00:01.512) 0:00:03.603 *********** 2025-06-02 17:43:13.831894 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:13.831901 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:13.831908 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:13.831915 | orchestrator | 2025-06-02 17:43:13.831922 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-02 17:43:13.831930 | orchestrator | Monday 02 June 2025 17:36:29 +0000 (0:00:01.324) 0:00:04.927 *********** 2025-06-02 17:43:13.831937 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.831944 | orchestrator | 2025-06-02 17:43:13.831951 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-06-02 17:43:13.831958 | orchestrator | Monday 02 June 2025 17:36:30 +0000 (0:00:01.556) 0:00:06.484 *********** 2025-06-02 17:43:13.831965 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:13.831972 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:13.831979 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:13.831986 | orchestrator | 2025-06-02 17:43:13.831993 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-06-02 17:43:13.832000 | orchestrator | Monday 02 June 2025 17:36:31 +0000 (0:00:01.314) 0:00:07.798 *********** 2025-06-02 17:43:13.832008 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-02 17:43:13.832015 | orchestrator | 
changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-02 17:43:13.832022 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-02 17:43:13.832029 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-02 17:43:13.832036 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-02 17:43:13.832043 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-02 17:43:13.832050 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-02 17:43:13.832070 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-02 17:43:13.832136 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-02 17:43:13.832144 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-02 17:43:13.832151 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-02 17:43:13.832158 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-02 17:43:13.832166 | orchestrator | 2025-06-02 17:43:13.832173 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-02 17:43:13.832180 | orchestrator | Monday 02 June 2025 17:36:35 +0000 (0:00:03.759) 0:00:11.557 *********** 2025-06-02 17:43:13.832187 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-02 17:43:13.832195 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-02 17:43:13.832202 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-02 17:43:13.832210 | orchestrator | 2025-06-02 
17:43:13.832217 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-02 17:43:13.832224 | orchestrator | Monday 02 June 2025 17:36:37 +0000 (0:00:01.417) 0:00:12.975 *********** 2025-06-02 17:43:13.832351 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-02 17:43:13.832368 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-02 17:43:13.832377 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-02 17:43:13.832386 | orchestrator | 2025-06-02 17:43:13.832395 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-02 17:43:13.832403 | orchestrator | Monday 02 June 2025 17:36:38 +0000 (0:00:01.698) 0:00:14.674 *********** 2025-06-02 17:43:13.832411 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-06-02 17:43:13.832420 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.832439 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-06-02 17:43:13.832448 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.832456 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-06-02 17:43:13.832464 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.832472 | orchestrator | 2025-06-02 17:43:13.832480 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-06-02 17:43:13.832489 | orchestrator | Monday 02 June 2025 17:36:39 +0000 (0:00:01.114) 0:00:15.788 *********** 2025-06-02 17:43:13.832500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 17:43:13.832514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 17:43:13.832523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 17:43:13.832537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:43:13.832546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:43:13.832565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:43:13.832575 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:43:13.832584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:43:13.832592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:43:13.832601 | orchestrator | 2025-06-02 17:43:13.832610 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-06-02 17:43:13.832619 | orchestrator | Monday 02 June 2025 17:36:43 +0000 (0:00:03.532) 0:00:19.320 *********** 2025-06-02 17:43:13.832627 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.832634 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.832641 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.832648 | orchestrator | 2025-06-02 17:43:13.832656 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-06-02 17:43:13.832663 | orchestrator | Monday 02 June 2025 17:36:45 +0000 (0:00:02.083) 0:00:21.404 
*********** 2025-06-02 17:43:13.832670 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-06-02 17:43:13.832677 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-06-02 17:43:13.832685 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-06-02 17:43:13.832692 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-06-02 17:43:13.832718 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-06-02 17:43:13.832726 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-06-02 17:43:13.832734 | orchestrator | 2025-06-02 17:43:13.832741 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-06-02 17:43:13.832749 | orchestrator | Monday 02 June 2025 17:36:48 +0000 (0:00:02.519) 0:00:23.924 *********** 2025-06-02 17:43:13.832786 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.832795 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.832802 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.832810 | orchestrator | 2025-06-02 17:43:13.832817 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-06-02 17:43:13.832828 | orchestrator | Monday 02 June 2025 17:36:50 +0000 (0:00:02.665) 0:00:26.589 *********** 2025-06-02 17:43:13.832836 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:43:13.832843 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:43:13.832850 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:43:13.832858 | orchestrator | 2025-06-02 17:43:13.832887 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-06-02 17:43:13.832896 | orchestrator | Monday 02 June 2025 17:36:52 +0000 (0:00:02.028) 0:00:28.618 *********** 2025-06-02 17:43:13.832903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.832918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.832926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.832955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.832964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.832995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4e95e63a57cf5a1e8ac656927281f5a4bf766b1d', '__omit_place_holder__4e95e63a57cf5a1e8ac656927281f5a4bf766b1d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 17:43:13.833008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.833016 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.833024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4e95e63a57cf5a1e8ac656927281f5a4bf766b1d', '__omit_place_holder__4e95e63a57cf5a1e8ac656927281f5a4bf766b1d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 17:43:13.833031 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.833046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.833055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.833062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.833070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4e95e63a57cf5a1e8ac656927281f5a4bf766b1d', '__omit_place_holder__4e95e63a57cf5a1e8ac656927281f5a4bf766b1d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-02 17:43:13.833082 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.833089 | orchestrator | 2025-06-02 17:43:13.833130 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] 
************
2025-06-02 17:43:13.833138 | orchestrator | Monday 02 June 2025 17:36:53 +0000 (0:00:00.994) 0:00:29.612 ***********
2025-06-02 17:43:13.833150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.833158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.833173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.833181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.833188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.833201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.833209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4e95e63a57cf5a1e8ac656927281f5a4bf766b1d', '__omit_place_holder__4e95e63a57cf5a1e8ac656927281f5a4bf766b1d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-02 17:43:13.833216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.833224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.833236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4e95e63a57cf5a1e8ac656927281f5a4bf766b1d', '__omit_place_holder__4e95e63a57cf5a1e8ac656927281f5a4bf766b1d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-02 17:43:13.833244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.833252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__4e95e63a57cf5a1e8ac656927281f5a4bf766b1d', '__omit_place_holder__4e95e63a57cf5a1e8ac656927281f5a4bf766b1d'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-06-02 17:43:13.833264 | orchestrator |
2025-06-02 17:43:13.833283 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2025-06-02 17:43:13.833291 | orchestrator | Monday 02 June 2025 17:36:57 +0000 (0:00:03.609) 0:00:33.222 ***********
2025-06-02 17:43:13.833298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.833445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.833529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.833556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.833569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.833582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.833604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.833643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.833674 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.833688 | orchestrator |
2025-06-02 17:43:13.833701 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-06-02 17:43:13.833714 | orchestrator | Monday 02 June 2025 17:37:01 +0000 (0:00:04.280) 0:00:37.503 ***********
2025-06-02 17:43:13.833727 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-02 17:43:13.833741 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-02 17:43:13.833753 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-06-02 17:43:13.833766 | orchestrator |
2025-06-02 17:43:13.833779 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-06-02 17:43:13.833792 | orchestrator | Monday 02 June 2025 17:37:03 +0000 (0:00:02.080) 0:00:39.583 ***********
2025-06-02 17:43:13.833805 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-02 17:43:13.833816 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-02 17:43:13.833828 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-06-02 17:43:13.833842 | orchestrator |
2025-06-02 17:43:13.834987 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-06-02 17:43:13.835175 | orchestrator | Monday 02 June 2025 17:37:09 +0000 (0:00:05.658) 0:00:45.242 ***********
2025-06-02 17:43:13.835207 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.835224 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.835241 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.835258 | orchestrator |
2025-06-02 17:43:13.835276 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-06-02 17:43:13.835312 | orchestrator | Monday 02 June 2025 17:37:10 +0000 (0:00:01.282) 0:00:46.524 ***********
2025-06-02 17:43:13.835330 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-02 17:43:13.835347 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-02 17:43:13.835363 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-06-02 17:43:13.835377 | orchestrator |
2025-06-02 17:43:13.835392 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-06-02 17:43:13.835408 | orchestrator | Monday 02 June 2025 17:37:14 +0000 (0:00:04.105) 0:00:50.629 ***********
2025-06-02 17:43:13.835423 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-02 17:43:13.835438 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-02 17:43:13.835453 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-06-02 17:43:13.835468 | orchestrator |
2025-06-02 17:43:13.835482 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-06-02 17:43:13.835496 | orchestrator | Monday 02 June 2025 17:37:17 +0000 (0:00:02.766) 0:00:53.396 ***********
2025-06-02 17:43:13.835510 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-06-02 17:43:13.835563 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-06-02 17:43:13.835579 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-06-02 17:43:13.835595 | orchestrator |
2025-06-02 17:43:13.835611 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-06-02 17:43:13.835628 | orchestrator | Monday 02 June 2025 17:37:19 +0000 (0:00:01.556) 0:00:54.952 ***********
2025-06-02 17:43:13.835645 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-06-02 17:43:13.835662 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-06-02 17:43:13.835679 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-06-02 17:43:13.835695 | orchestrator |
2025-06-02 17:43:13.835711 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-06-02 17:43:13.835883 | orchestrator | Monday 02 June 2025 17:37:20 +0000 (0:00:01.623) 0:00:56.575 ***********
2025-06-02 17:43:13.835901 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:13.835917 | orchestrator |
2025-06-02 17:43:13.835933 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-06-02 17:43:13.835949 | orchestrator | Monday 02 June 2025 17:37:21 +0000 (0:00:01.091) 0:00:57.667 ***********
2025-06-02 17:43:13.835976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.836030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.836081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.836122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.836139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.836155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.836171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.836195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.836210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.836234 | orchestrator |
2025-06-02 17:43:13.836248 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-06-02 17:43:13.836263 | orchestrator | Monday 02 June 2025 17:37:25 +0000 (0:00:03.334) 0:01:01.002 ***********
2025-06-02 17:43:13.836289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.836305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.836314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.836324 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.836333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.836347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.836357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.836459 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.836471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.836514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.836525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.836534 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.836543 | orchestrator |
2025-06-02 17:43:13.836552 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-06-02 17:43:13.836561 | orchestrator | Monday 02 June 2025 17:37:25 +0000 (0:00:00.651) 0:01:01.654 ***********
2025-06-02 17:43:13.836570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.836580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.836594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.836609 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.836619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.836635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.836645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.836653 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.836662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.836671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.836680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.836695 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.836703 | orchestrator |
2025-06-02 17:43:13.836716 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-06-02 17:43:13.836725 | orchestrator | Monday 02 June 2025 17:37:27 +0000 (0:00:01.425) 0:01:03.079 ***********
2025-06-02 17:43:13.836734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.836750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.836759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.836768 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.836777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-06-02 17:43:13.836853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-06-02 17:43:13.836862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-06-02 17:43:13.836877 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.836895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.836904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.836920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.836930 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.836938 | orchestrator | 2025-06-02 17:43:13.836947 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-02 17:43:13.836957 | orchestrator | Monday 02 June 2025 17:37:28 +0000 (0:00:01.517) 0:01:04.597 *********** 2025-06-02 17:43:13.836972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.836988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.837025 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.837040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.837144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.837152 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.837168 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.837177 | orchestrator | 2025-06-02 17:43:13.837185 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-02 17:43:13.837194 | orchestrator | Monday 02 June 2025 17:37:30 +0000 (0:00:01.377) 0:01:05.975 *********** 2025-06-02 17:43:13.837203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.837235 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.837250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.837283 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.837292 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.837323 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.837332 | orchestrator | 2025-06-02 17:43:13.837340 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-02 17:43:13.837397 | orchestrator | Monday 02 June 2025 17:37:31 +0000 (0:00:01.535) 0:01:07.510 
*********** 2025-06-02 17:43:13.837408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.837527 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.837571 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.837600 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.837609 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.837638 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.837646 | orchestrator | 2025-06-02 17:43:13.837654 | orchestrator | TASK [service-cert-copy : proxysql 
| Copying over backend internal TLS certificate] *** 2025-06-02 17:43:13.837667 | orchestrator | Monday 02 June 2025 17:37:32 +0000 (0:00:00.632) 0:01:08.143 *********** 2025-06-02 17:43:13.837675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.837700 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.837711 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-06-02 17:43:13.837762 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.837772 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.837802 | 
orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.837810 | orchestrator | 2025-06-02 17:43:13.837818 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-02 17:43:13.837826 | orchestrator | Monday 02 June 2025 17:37:32 +0000 (0:00:00.655) 0:01:08.799 *********** 2025-06-02 17:43:13.837838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.837863 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.837876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.837929 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.837938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-02 17:43:13.837950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-02 17:43:13.837961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-02 17:43:13.838003 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.838117 | orchestrator | 2025-06-02 17:43:13.838131 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-02 17:43:13.838139 | orchestrator | Monday 02 June 2025 17:37:34 +0000 (0:00:01.389) 0:01:10.188 *********** 2025-06-02 17:43:13.838147 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 17:43:13.838190 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 17:43:13.838206 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-02 17:43:13.838214 | orchestrator | 2025-06-02 17:43:13.838222 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-02 17:43:13.838289 | orchestrator | Monday 02 June 2025 17:37:35 +0000 (0:00:01.443) 0:01:11.632 *********** 2025-06-02 17:43:13.838297 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 17:43:13.838305 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 17:43:13.838313 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-02 17:43:13.838321 | orchestrator | 2025-06-02 17:43:13.838329 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-02 17:43:13.838337 | orchestrator | Monday 02 June 2025 17:37:37 +0000 (0:00:01.555) 0:01:13.187 *********** 2025-06-02 17:43:13.838345 | orchestrator | skipping: 
[testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 17:43:13.838353 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 17:43:13.838361 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 17:43:13.838368 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.838376 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 17:43:13.838384 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 17:43:13.838392 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.838400 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 17:43:13.838408 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.838416 | orchestrator | 2025-06-02 17:43:13.838423 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-02 17:43:13.838431 | orchestrator | Monday 02 June 2025 17:37:38 +0000 (0:00:01.511) 0:01:14.698 *********** 2025-06-02 17:43:13.838440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-02 17:43:13.838453 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-02 17:43:13.838462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-02 17:43:13.838482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:43:13.838490 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:43:13.838499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-02 17:43:13.838507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:43:13.838515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:43:13.838526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-02 17:43:13.838535 | orchestrator | 2025-06-02 17:43:13.838548 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-02 17:43:13.838556 | orchestrator | Monday 02 June 2025 17:37:41 +0000 (0:00:02.990) 0:01:17.689 *********** 2025-06-02 17:43:13.838564 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.838571 | orchestrator | 2025-06-02 17:43:13.838579 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-02 17:43:13.838587 | orchestrator | Monday 02 June 2025 17:37:42 +0000 (0:00:01.132) 0:01:18.822 *********** 2025-06-02 17:43:13.838596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 17:43:13.838612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 17:43:13.838621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.838629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.838638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 17:43:13.838655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 17:43:13.838663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-02 17:43:13.838676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.838684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 17:43:13.838692 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.838741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.838750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.838763 | orchestrator | 2025-06-02 17:43:13.838771 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-02 17:43:13.838779 | orchestrator | Monday 02 
June 2025 17:37:49 +0000 (0:00:06.121) 0:01:24.944 *********** 2025-06-02 17:43:13.838787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 17:43:13.838824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 17:43:13.838833 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.838841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.838849 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.838882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 17:43:13.838901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 17:43:13.838910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.838918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.838926 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.838940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-02 17:43:13.838949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-02 17:43:13.838959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.838980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.838993 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.839006 | orchestrator | 2025-06-02 17:43:13.839019 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-02 17:43:13.839038 | orchestrator | Monday 02 June 2025 17:37:50 +0000 (0:00:01.171) 0:01:26.116 *********** 2025-06-02 17:43:13.839052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 17:43:13.839087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 17:43:13.839311 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.839333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-02 17:43:13.839342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 17:43:13.839349 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.839358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}})  2025-06-02 17:43:13.839366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-02 17:43:13.839374 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.839382 | orchestrator | 2025-06-02 17:43:13.839400 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-02 17:43:13.839408 | orchestrator | Monday 02 June 2025 17:37:52 +0000 (0:00:01.901) 0:01:28.018 *********** 2025-06-02 17:43:13.839416 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.839424 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.839432 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.839440 | orchestrator | 2025-06-02 17:43:13.839448 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-02 17:43:13.839455 | orchestrator | Monday 02 June 2025 17:37:53 +0000 (0:00:01.600) 0:01:29.618 *********** 2025-06-02 17:43:13.839463 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.839471 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.839479 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.839487 | orchestrator | 2025-06-02 17:43:13.839495 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-02 17:43:13.839503 | orchestrator | Monday 02 June 2025 17:37:55 +0000 (0:00:02.250) 0:01:31.868 *********** 2025-06-02 17:43:13.839510 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.839518 | orchestrator | 2025-06-02 17:43:13.839526 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-02 17:43:13.839534 | orchestrator | Monday 02 June 2025 17:37:56 +0000 
(0:00:00.746) 0:01:32.614 *********** 2025-06-02 17:43:13.839553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.839563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.839577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.839586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.839600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  
2025-06-02 17:43:13.839609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.839626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.839638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.839646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.839654 | orchestrator | 2025-06-02 17:43:13.839663 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-02 17:43:13.839671 | orchestrator | Monday 02 June 2025 17:38:03 +0000 (0:00:06.519) 0:01:39.134 *********** 2025-06-02 17:43:13.839685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.839694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.839708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.839716 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.839724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.839736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.839745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.839753 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.839767 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.839781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.839789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.839797 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.839805 | orchestrator | 2025-06-02 17:43:13.839813 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-02 17:43:13.839821 | orchestrator | Monday 02 June 2025 17:38:04 +0000 (0:00:00.962) 0:01:40.097 *********** 2025-06-02 17:43:13.839829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 17:43:13.839838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 17:43:13.839846 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.839858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 17:43:13.839866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 17:43:13.839874 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.839882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 17:43:13.839890 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-02 17:43:13.839898 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.839906 | orchestrator | 2025-06-02 17:43:13.839914 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-06-02 17:43:13.839947 | orchestrator | Monday 02 June 2025 17:38:05 +0000 (0:00:01.098) 0:01:41.196 *********** 2025-06-02 17:43:13.839969 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.839977 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.839993 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.840007 | orchestrator | 2025-06-02 17:43:13.840015 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-06-02 17:43:13.840023 | orchestrator | Monday 02 June 2025 17:38:07 +0000 (0:00:01.859) 0:01:43.055 *********** 2025-06-02 17:43:13.840031 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.840039 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.840047 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.840055 | orchestrator | 2025-06-02 17:43:13.840067 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-06-02 17:43:13.840075 | orchestrator | Monday 02 June 2025 17:38:09 +0000 (0:00:02.070) 0:01:45.126 *********** 2025-06-02 17:43:13.840083 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.840091 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.840119 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.840127 | orchestrator | 2025-06-02 17:43:13.840135 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-02 
17:43:13.840143 | orchestrator | Monday 02 June 2025 17:38:09 +0000 (0:00:00.312) 0:01:45.438 *********** 2025-06-02 17:43:13.840151 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.840159 | orchestrator | 2025-06-02 17:43:13.840167 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-06-02 17:43:13.840175 | orchestrator | Monday 02 June 2025 17:38:10 +0000 (0:00:00.646) 0:01:46.085 *********** 2025-06-02 17:43:13.840183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 17:43:13.840194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 17:43:13.840207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-02 17:43:13.840220 | orchestrator | 2025-06-02 17:43:13.840228 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-02 17:43:13.840236 | orchestrator | Monday 02 June 2025 17:38:14 +0000 (0:00:03.891) 0:01:49.976 *********** 2025-06-02 17:43:13.840250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 17:43:13.840259 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.840267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 17:43:13.840275 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.840283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 
2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-02 17:43:13.840291 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.840299 | orchestrator | 2025-06-02 17:43:13.840307 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-02 17:43:13.840315 | orchestrator | Monday 02 June 2025 17:38:16 +0000 (0:00:02.721) 0:01:52.698 *********** 2025-06-02 17:43:13.840324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 17:43:13.840338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 17:43:13.840355 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.840363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 17:43:13.840372 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 17:43:13.840380 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.840393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 17:43:13.840402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-02 17:43:13.840410 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.840418 | orchestrator | 2025-06-02 17:43:13.840426 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-06-02 17:43:13.840434 | orchestrator | Monday 02 June 2025 17:38:19 +0000 (0:00:02.379) 0:01:55.077 *********** 2025-06-02 17:43:13.840529 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.840538 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.840546 | orchestrator | skipping: [testbed-node-2] 2025-06-02 
17:43:13.840553 | orchestrator | 2025-06-02 17:43:13.840561 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-02 17:43:13.840569 | orchestrator | Monday 02 June 2025 17:38:20 +0000 (0:00:01.363) 0:01:56.440 *********** 2025-06-02 17:43:13.840577 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.840585 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.840593 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.840600 | orchestrator | 2025-06-02 17:43:13.840608 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-02 17:43:13.840616 | orchestrator | Monday 02 June 2025 17:38:22 +0000 (0:00:01.538) 0:01:57.979 *********** 2025-06-02 17:43:13.840624 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.840632 | orchestrator | 2025-06-02 17:43:13.840640 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-02 17:43:13.840647 | orchestrator | Monday 02 June 2025 17:38:22 +0000 (0:00:00.842) 0:01:58.822 *********** 2025-06-02 17:43:13.840656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.840675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840697 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840707 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.840715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.840762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840770 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840786 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840795 | orchestrator | 2025-06-02 17:43:13.840803 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-02 17:43:13.840810 | orchestrator | Monday 02 June 2025 17:38:27 +0000 (0:00:04.929) 0:02:03.752 *********** 2025-06-02 17:43:13.840823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.840831 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840866 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.840874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.840886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840916 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.840924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.840937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840968 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.840976 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.840984 | orchestrator | 2025-06-02 17:43:13.840992 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-02 17:43:13.841000 | orchestrator | Monday 02 June 2025 17:38:29 +0000 (0:00:01.761) 0:02:05.513 *********** 2025-06-02 17:43:13.841008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 17:43:13.841020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 17:43:13.841029 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.841037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 17:43:13.841045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 17:43:13.841053 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.841061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 17:43:13.841074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-02 17:43:13.841082 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.841090 | orchestrator | 2025-06-02 17:43:13.841139 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-02 17:43:13.841147 | orchestrator | Monday 02 June 2025 17:38:30 +0000 (0:00:00.898) 0:02:06.411 *********** 2025-06-02 17:43:13.841155 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.841163 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.841171 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.841179 | orchestrator | 2025-06-02 17:43:13.841187 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-02 17:43:13.841195 | orchestrator | Monday 02 June 2025 17:38:31 +0000 (0:00:01.356) 0:02:07.768 *********** 2025-06-02 17:43:13.841203 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.841210 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.841218 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.841226 | orchestrator | 2025-06-02 17:43:13.841234 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-02 17:43:13.841242 | orchestrator | Monday 02 June 2025 17:38:33 +0000 
(0:00:02.016) 0:02:09.784 *********** 2025-06-02 17:43:13.841249 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.841257 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.841265 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.841273 | orchestrator | 2025-06-02 17:43:13.841281 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-02 17:43:13.841289 | orchestrator | Monday 02 June 2025 17:38:34 +0000 (0:00:00.511) 0:02:10.296 *********** 2025-06-02 17:43:13.841297 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.841305 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.841313 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.841340 | orchestrator | 2025-06-02 17:43:13.841348 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-02 17:43:13.841356 | orchestrator | Monday 02 June 2025 17:38:34 +0000 (0:00:00.299) 0:02:10.595 *********** 2025-06-02 17:43:13.841364 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.841371 | orchestrator | 2025-06-02 17:43:13.841379 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-02 17:43:13.841387 | orchestrator | Monday 02 June 2025 17:38:35 +0000 (0:00:00.783) 0:02:11.378 *********** 2025-06-02 17:43:13.841400 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:43:13.841415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:43:13.841474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841501 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841510 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:43:13.841549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:43:13.841557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:43:13.841619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:43:13.841627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841676 | orchestrator | 2025-06-02 17:43:13.841684 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-02 17:43:13.841692 | orchestrator | Monday 02 June 2025 17:38:39 +0000 (0:00:03.933) 0:02:15.312 *********** 2025-06-02 
17:43:13.841705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:43:13.841713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:43:13.841721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:43:13.841770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:43:13.841787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': 
['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841795 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.841803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841857 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.841865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:43:13.841873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:43:13.841885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.841950 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.841957 | orchestrator | 2025-06-02 17:43:13.841965 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-02 17:43:13.841974 | orchestrator | Monday 02 June 2025 17:38:40 +0000 (0:00:00.887) 0:02:16.200 *********** 2025-06-02 17:43:13.841982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-02 17:43:13.841990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-02 17:43:13.841999 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.842007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-02 17:43:13.842045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-02 17:43:13.842053 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.842061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-02 17:43:13.842076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-02 17:43:13.842084 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.842091 | orchestrator | 2025-06-02 17:43:13.842117 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-02 17:43:13.842125 | orchestrator | Monday 02 June 2025 17:38:41 +0000 (0:00:01.017) 0:02:17.218 *********** 2025-06-02 17:43:13.842133 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.842141 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.842149 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.842156 | orchestrator | 2025-06-02 17:43:13.842164 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-02 17:43:13.842172 | orchestrator | Monday 02 June 2025 17:38:43 +0000 (0:00:01.799) 0:02:19.017 *********** 2025-06-02 17:43:13.842180 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.842188 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.842196 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.842204 | orchestrator | 2025-06-02 17:43:13.842211 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-02 17:43:13.842219 | orchestrator | Monday 02 June 2025 17:38:45 +0000 (0:00:02.114) 0:02:21.132 *********** 2025-06-02 17:43:13.842227 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.842235 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.842282 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.842290 | orchestrator | 2025-06-02 17:43:13.842355 | orchestrator | TASK [include_role : glance] 
*************************************************** 2025-06-02 17:43:13.842364 | orchestrator | Monday 02 June 2025 17:38:45 +0000 (0:00:00.346) 0:02:21.478 *********** 2025-06-02 17:43:13.842372 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.842380 | orchestrator | 2025-06-02 17:43:13.842413 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-02 17:43:13.842422 | orchestrator | Monday 02 June 2025 17:38:46 +0000 (0:00:00.908) 0:02:22.387 *********** 2025-06-02 17:43:13.842441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:43:13.842474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 17:43:13.842498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:43:13.842508 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:43:13.842531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 
'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 17:43:13.842541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-02 17:43:13.842554 | orchestrator | 2025-06-02 17:43:13.842562 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-02 17:43:13.842570 | orchestrator | Monday 02 June 2025 17:38:50 +0000 (0:00:04.026) 0:02:26.413 *********** 2025-06-02 17:43:13.842588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:43:13.842598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-02 17:43:13.842612 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.842627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 17:43:13.842643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-02 17:43:13.842656 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.842669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 17:43:13.842684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2025-06-02 17:43:13.842699 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.842707 | orchestrator |
2025-06-02 17:43:13.842715 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2025-06-02 17:43:13.842723 | orchestrator | Monday 02 June 2025 17:38:53 +0000 (0:00:02.963) 0:02:29.376 ***********
2025-06-02 17:43:13.842732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-02 17:43:13.842741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-02 17:43:13.842750 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.842761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-02 17:43:13.842770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-02 17:43:13.842778 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.842786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-02 17:43:13.842801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2025-06-02 17:43:13.842809 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.842817 | orchestrator |
2025-06-02 17:43:13.842825 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2025-06-02 17:43:13.842833 | orchestrator | Monday 02 June 2025 17:38:56 +0000 (0:00:03.258) 0:02:32.635 ***********
2025-06-02 17:43:13.842847 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.842855 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.842863 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.842871 | orchestrator |
2025-06-02 17:43:13.842879 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2025-06-02 17:43:13.842887 | orchestrator | Monday 02 June 2025 17:38:58 +0000 (0:00:01.443) 0:02:34.079 ***********
2025-06-02 17:43:13.842895 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.842903 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.842910 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.842918 | orchestrator |
2025-06-02 17:43:13.842926 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-06-02 17:43:13.842934 | orchestrator | Monday 02 June 2025 17:39:00 +0000 (0:00:02.010) 0:02:36.089 ***********
2025-06-02 17:43:13.842942 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.842950 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.842957 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.842965 | orchestrator |
2025-06-02 17:43:13.842973 | orchestrator | TASK [include_role : grafana] **************************************************
2025-06-02 17:43:13.842981 | orchestrator | Monday 02 June 2025 17:39:00 +0000 (0:00:00.335) 0:02:36.425 ***********
2025-06-02 17:43:13.842988 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:13.842996 | orchestrator |
2025-06-02 17:43:13.843004 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-06-02 17:43:13.843012 | orchestrator | Monday 02 June 2025 17:39:01 +0000 (0:00:00.874) 0:02:37.300 ***********
2025-06-02 17:43:13.843020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:43:13.843032 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:43:13.843041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:43:13.843049 | orchestrator |
2025-06-02 17:43:13.843057 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2025-06-02 17:43:13.843065 | orchestrator | Monday 02 June 2025 17:39:04 +0000 (0:00:03.501) 0:02:40.801 ***********
2025-06-02 17:43:13.843079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:43:13.843093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:43:13.843146 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.843155 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.843163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:43:13.843171 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.843179 | orchestrator |
2025-06-02 17:43:13.843187 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-06-02 17:43:13.843195 | orchestrator | Monday 02 June 2025 17:39:05 +0000 (0:00:00.413) 0:02:41.214 ***********
2025-06-02 17:43:13.843203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-02 17:43:13.843211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-02 17:43:13.843219 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.843227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-02 17:43:13.843239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-02 17:43:13.843247 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.843255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-06-02 17:43:13.843263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-06-02 17:43:13.843271 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.843285 | orchestrator |
2025-06-02 17:43:13.843293 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-06-02 17:43:13.843301 | orchestrator | Monday 02 June 2025 17:39:05 +0000 (0:00:00.665) 0:02:41.879 ***********
2025-06-02 17:43:13.843308 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.843316 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.843324 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.843332 | orchestrator |
2025-06-02 17:43:13.843340 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-06-02 17:43:13.843348 | orchestrator | Monday 02 June 2025 17:39:07 +0000 (0:00:01.596) 0:02:43.476 ***********
2025-06-02 17:43:13.843355 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.843363 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.843371 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.843379 | orchestrator |
2025-06-02 17:43:13.843386 | orchestrator | TASK [include_role : heat] *****************************************************
2025-06-02 17:43:13.843394 | orchestrator | Monday 02 June 2025 17:39:09 +0000 (0:00:02.237) 0:02:45.714 ***********
2025-06-02 17:43:13.843402 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.843410 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.843423 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.843430 | orchestrator |
2025-06-02 17:43:13.843438 | orchestrator | TASK [include_role : horizon] **************************************************
2025-06-02 17:43:13.843446 | orchestrator | Monday 02 June 2025 17:39:10 +0000 (0:00:00.335) 0:02:46.049 ***********
2025-06-02 17:43:13.843454 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:13.843462 | orchestrator |
2025-06-02 17:43:13.843469 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-06-02 17:43:13.843477 | orchestrator | Monday 02 June 2025 17:39:11 +0000 (0:00:00.981) 0:02:47.030 ***********
2025-06-02 17:43:13.843486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:43:13.843510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:43:13.843524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:43:13.843538 | orchestrator |
2025-06-02 17:43:13.843546 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-06-02 17:43:13.843554 | orchestrator | Monday 02 June 2025 17:39:16 +0000 (0:00:05.210) 0:02:52.241 ***********
2025-06-02 17:43:13.843569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:43:13.843578 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.843592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:43:13.843609 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.843623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:43:13.843632 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.843640 | orchestrator |
2025-06-02 17:43:13.843648 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-06-02 17:43:13.843656 | orchestrator | Monday 02 June 2025 17:39:17 +0000 (0:00:00.915) 0:02:53.157 ***********
2025-06-02 17:43:13.843664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 17:43:13.843682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 17:43:13.843698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 17:43:13.843717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 17:43:13.843731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-06-02 17:43:13.843744 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.843755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 17:43:13.843769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 17:43:13.843782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 17:43:13.843796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-06-02 17:43:13.843805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 17:43:13.843813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-06-02 17:43:13.843821 | orchestrator | skipping: [testbed-node-1] =>
(item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-02 17:43:13.843829 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.843837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-02 17:43:13.843845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-02 17:43:13.843859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-02 17:43:13.843866 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.843874 | orchestrator | 2025-06-02 17:43:13.843882 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-06-02 17:43:13.843890 | orchestrator | Monday 02 June 2025 17:39:18 +0000 (0:00:01.018) 0:02:54.176 *********** 2025-06-02 17:43:13.843898 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.843906 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.843914 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.843921 | orchestrator | 2025-06-02 17:43:13.843929 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-06-02 17:43:13.843937 | orchestrator | Monday 02 June 2025 17:39:20 +0000 (0:00:01.873) 0:02:56.050 
*********** 2025-06-02 17:43:13.843945 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.843952 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.843960 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.843968 | orchestrator | 2025-06-02 17:43:13.843976 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-06-02 17:43:13.843988 | orchestrator | Monday 02 June 2025 17:39:22 +0000 (0:00:02.170) 0:02:58.220 *********** 2025-06-02 17:43:13.843996 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.844004 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.844011 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.844019 | orchestrator | 2025-06-02 17:43:13.844027 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-06-02 17:43:13.844035 | orchestrator | Monday 02 June 2025 17:39:22 +0000 (0:00:00.314) 0:02:58.535 *********** 2025-06-02 17:43:13.844043 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.844051 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.844058 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.844066 | orchestrator | 2025-06-02 17:43:13.844074 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-06-02 17:43:13.844081 | orchestrator | Monday 02 June 2025 17:39:22 +0000 (0:00:00.350) 0:02:58.885 *********** 2025-06-02 17:43:13.844089 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.844222 | orchestrator | 2025-06-02 17:43:13.844246 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-06-02 17:43:13.844254 | orchestrator | Monday 02 June 2025 17:39:24 +0000 (0:00:01.235) 0:03:00.120 *********** 2025-06-02 17:43:13.844272 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:43:13.844281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:43:13.844296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 17:43:13.844309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:43:13.844316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:43:13.844323 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 17:43:13.844335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:43:13.844347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:43:13.844354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 17:43:13.844361 | orchestrator | 2025-06-02 17:43:13.844368 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-02 17:43:13.844375 | orchestrator | Monday 02 June 2025 17:39:27 +0000 (0:00:03.465) 0:03:03.585 *********** 2025-06-02 17:43:13.844385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 17:43:13.844393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:43:13.844406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 17:43:13.844420 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.844428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 17:43:13.844435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:43:13.844442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': 
'30'}}})  2025-06-02 17:43:13.844452 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.844459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 17:43:13.844732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:43:13.844756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 17:43:13.844763 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.844770 | orchestrator | 2025-06-02 17:43:13.844777 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-06-02 17:43:13.844784 | orchestrator | Monday 02 June 2025 17:39:28 +0000 (0:00:00.624) 0:03:04.210 *********** 2025-06-02 17:43:13.844791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 17:43:13.844800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 17:43:13.844807 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.844814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 17:43:13.844821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}})  2025-06-02 17:43:13.844828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 17:43:13.844836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-02 17:43:13.844847 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.844854 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.844861 | orchestrator | 2025-06-02 17:43:13.844868 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-06-02 17:43:13.844874 | orchestrator | Monday 02 June 2025 17:39:29 +0000 (0:00:01.167) 0:03:05.377 *********** 2025-06-02 17:43:13.844881 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.844887 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.844894 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.844900 | orchestrator | 2025-06-02 17:43:13.844907 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-06-02 17:43:13.844914 | orchestrator | Monday 02 June 2025 17:39:30 +0000 (0:00:01.378) 0:03:06.756 *********** 2025-06-02 17:43:13.844920 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.844927 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.844933 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.844940 | orchestrator | 2025-06-02 17:43:13.844946 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 
2025-06-02 17:43:13.844957 | orchestrator | Monday 02 June 2025 17:39:32 +0000 (0:00:02.101) 0:03:08.858 *********** 2025-06-02 17:43:13.844964 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.844970 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.844977 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.844983 | orchestrator | 2025-06-02 17:43:13.844990 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-06-02 17:43:13.844996 | orchestrator | Monday 02 June 2025 17:39:33 +0000 (0:00:00.319) 0:03:09.178 *********** 2025-06-02 17:43:13.845003 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.845009 | orchestrator | 2025-06-02 17:43:13.845016 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-06-02 17:43:13.845023 | orchestrator | Monday 02 June 2025 17:39:34 +0000 (0:00:01.204) 0:03:10.382 *********** 2025-06-02 17:43:13.845035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:43:13.845044 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.845052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:43:13.845063 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 17:43:13.845089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845117 | orchestrator |
2025-06-02 17:43:13.845125 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-06-02 17:43:13.845132 | orchestrator | Monday 02 June 2025 17:39:39 +0000 (0:00:04.896) 0:03:15.279 ***********
2025-06-02 17:43:13.845139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 17:43:13.845146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845153 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.845164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 17:43:13.845180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845187 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.845194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-06-02 17:43:13.845201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845208 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.845214 | orchestrator |
2025-06-02 17:43:13.845221 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2025-06-02 17:43:13.845238 | orchestrator | Monday 02 June 2025 17:39:40 +0000 (0:00:00.852) 0:03:16.131 ***********
2025-06-02 17:43:13.845246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-02 17:43:13.845253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-02 17:43:13.845270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-02 17:43:13.845281 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.845291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-02 17:43:13.845298 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.845305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2025-06-02 17:43:13.845312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2025-06-02 17:43:13.845318 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.845325 | orchestrator |
2025-06-02 17:43:13.845332 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2025-06-02 17:43:13.845339 | orchestrator | Monday 02 June 2025 17:39:42 +0000 (0:00:02.396) 0:03:18.528 ***********
2025-06-02 17:43:13.845346 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.845354 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.845361 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.845369 | orchestrator |
2025-06-02 17:43:13.845376 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2025-06-02 17:43:13.845384 | orchestrator | Monday 02 June 2025 17:39:44 +0000 (0:00:01.614) 0:03:20.142 ***********
2025-06-02 17:43:13.845391 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.845399 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.845450 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.845460 | orchestrator |
2025-06-02 17:43:13.845467 | orchestrator | TASK [include_role : manila] ***************************************************
2025-06-02 17:43:13.845475 | orchestrator | Monday 02 June 2025 17:39:46 +0000 (0:00:02.171) 0:03:22.314 ***********
2025-06-02 17:43:13.845487 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:13.845495 | orchestrator |
2025-06-02 17:43:13.845503 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2025-06-02 17:43:13.845510 | orchestrator | Monday 02 June 2025 17:39:47 +0000 (0:00:01.086) 0:03:23.401 ***********
2025-06-02 17:43:13.845518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 17:43:13.845526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 17:43:13.845657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 17:43:13.845674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845757 | orchestrator |
2025-06-02 17:43:13.845763 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2025-06-02 17:43:13.845770 | orchestrator | Monday 02 June 2025 17:39:51 +0000 (0:00:03.740) 0:03:27.141 ***********
2025-06-02 17:43:13.845777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 17:43:13.845784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845813 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.845820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 17:43:13.845832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845857 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.845864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2025-06-02 17:43:13.845871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.845897 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.845904 | orchestrator |
2025-06-02 17:43:13.845910 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2025-06-02 17:43:13.845917 | orchestrator | Monday 02 June 2025 17:39:51 +0000 (0:00:00.680) 0:03:27.822 ***********
2025-06-02 17:43:13.845924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-02 17:43:13.845931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-02 17:43:13.845942 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.845949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-02 17:43:13.845955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-02 17:43:13.845962 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.845968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2025-06-02 17:43:13.845975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2025-06-02 17:43:13.845982 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.845989 | orchestrator |
2025-06-02 17:43:13.845995 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2025-06-02 17:43:13.846052 | orchestrator | Monday 02 June 2025 17:39:52 +0000 (0:00:00.905) 0:03:28.728 ***********
2025-06-02 17:43:13.846062 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.846069 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.846075 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.846082 | orchestrator |
2025-06-02 17:43:13.846089 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2025-06-02 17:43:13.846128 | orchestrator | Monday 02 June 2025 17:39:54 +0000 (0:00:01.677) 0:03:30.406 ***********
2025-06-02 17:43:13.846135 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.846142 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.846148 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.846155 | orchestrator |
2025-06-02 17:43:13.846162 | orchestrator | TASK [include_role : mariadb] **************************************************
2025-06-02 17:43:13.846168 | orchestrator | Monday 02 June 2025 17:39:56 +0000 (0:00:02.072) 0:03:32.478 ***********
2025-06-02 17:43:13.846175 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:13.846182 | orchestrator |
2025-06-02 17:43:13.846188 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2025-06-02 17:43:13.846195 | orchestrator | Monday 02 June 2025 17:39:57 +0000 (0:00:01.060) 0:03:33.539 ***********
2025-06-02 17:43:13.846202 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:43:13.846209 | orchestrator |
2025-06-02 17:43:13.846219 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-06-02 17:43:13.846226 | orchestrator | Monday 02 June 2025 17:40:00 +0000 (0:00:03.113) 0:03:36.652 ***********
2025-06-02 17:43:13.846241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:43:13.846257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 17:43:13.846265 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.846275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:43:13.846283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 17:43:13.846290 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.846309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:43:13.846317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 17:43:13.846324 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.846331 | orchestrator |
2025-06-02 17:43:13.846338 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-06-02 17:43:13.846345 | orchestrator | Monday 02 June 2025 17:40:03 +0000 (0:00:02.489) 0:03:39.142 ***********
2025-06-02 17:43:13.846355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2
fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 17:43:13.846371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 17:43:13.846379 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.846386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 17:43:13.846396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-02 17:43:13.846404 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 17:43:13.846416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 17:43:13.846428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': 
{'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-06-02 17:43:13.846435 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.846442 | orchestrator |
2025-06-02 17:43:13.846449 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-06-02 17:43:13.846455 | orchestrator | Monday 02 June 2025 17:40:05 +0000 (0:00:02.116) 0:03:41.258 ***********
2025-06-02 17:43:13.846462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 17:43:13.846472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 17:43:13.846479 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.846517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 17:43:13.846529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 17:43:13.846536 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.846548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check
port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 17:43:13.846555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-06-02 17:43:13.846562 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.846569 | orchestrator |
2025-06-02 17:43:13.846575 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-06-02 17:43:13.846582 | orchestrator | Monday 02 June 2025 17:40:07 +0000 (0:00:02.578) 0:03:43.836 ***********
2025-06-02 17:43:13.846588 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.846595 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.846602 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.846608 | orchestrator |
2025-06-02 17:43:13.846615 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-06-02 17:43:13.846621 | orchestrator | Monday 02 June 2025 17:40:10 +0000 (0:00:02.304) 0:03:46.140 ***********
2025-06-02 17:43:13.846628 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.846634 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.846641 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.846648 | orchestrator |
2025-06-02 17:43:13.846654 | orchestrator | TASK [include_role : masakari] *************************************************
2025-06-02 17:43:13.846661 | orchestrator | Monday 02 June 2025 17:40:11 +0000 (0:00:01.566) 0:03:47.707 ***********
2025-06-02 17:43:13.846667 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.846674 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.846680 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.846687 | orchestrator |
2025-06-02 17:43:13.846693 | orchestrator | TASK [include_role : memcached] ************************************************
2025-06-02 17:43:13.846700 | orchestrator | Monday 02 June 2025 17:40:12 +0000 (0:00:00.328) 0:03:48.036 ***********
2025-06-02 17:43:13.846707 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:13.846713 | orchestrator |
2025-06-02 17:43:13.846720 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-06-02 17:43:13.846730 | orchestrator | Monday 02 June 2025 17:40:13 +0000 (0:00:01.091) 0:03:49.128 ***********
2025-06-02 17:43:13.846741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 17:43:13.846748 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image':
'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-02 17:43:13.846760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-02 17:43:13.846767 | orchestrator | 2025-06-02 17:43:13.846774 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-06-02 17:43:13.846781 | orchestrator | Monday 02 June 2025 17:40:15 +0000 (0:00:01.843) 0:03:50.971 *********** 2025-06-02 17:43:13.846788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-02 17:43:13.846795 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.846802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-02 17:43:13.846813 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.846823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': 
'30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-06-02 17:43:13.846830 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.846837 | orchestrator |
2025-06-02 17:43:13.846843 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-06-02 17:43:13.846850 | orchestrator | Monday 02 June 2025 17:40:15 +0000 (0:00:00.444) 0:03:51.415 ***********
2025-06-02 17:43:13.846857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-02 17:43:13.846865 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.846872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-02 17:43:13.846878 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.846897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-06-02 17:43:13.846904 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.846911 | orchestrator |
2025-06-02 17:43:13.846917 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-06-02 17:43:13.846924 | orchestrator | Monday 02 June 2025 17:40:16 +0000 (0:00:00.619) 0:03:52.035 ***********
2025-06-02 17:43:13.846930 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.846937 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.846944 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.846950 | orchestrator |
2025-06-02 17:43:13.846957 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-06-02 17:43:13.846963 | orchestrator | Monday 02 June 2025 17:40:16 +0000 (0:00:00.764) 0:03:52.800 ***********
2025-06-02 17:43:13.846970 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.846977 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.846983 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.846990 | orchestrator |
2025-06-02 17:43:13.846996 | orchestrator | TASK [include_role : mistral] **************************************************
2025-06-02 17:43:13.847003 | orchestrator | Monday 02 June 2025 17:40:18 +0000 (0:00:01.313) 0:03:54.113 ***********
2025-06-02 17:43:13.847010 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.847016 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.847023 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.847029 | orchestrator |
2025-06-02 17:43:13.847036 | orchestrator | TASK [include_role : neutron] **************************************************
2025-06-02 17:43:13.847047 | orchestrator | Monday 02 June 2025 17:40:18 +0000 (0:00:00.332) 0:03:54.446 ***********
2025-06-02 17:43:13.847054 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:13.847060 | orchestrator |
2025-06-02 17:43:13.847090 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-06-02 17:43:13.847120 | orchestrator | Monday 02 June 2025 17:40:19 +0000 (0:00:01.412) 0:03:55.858 ***********
2025-06-02 17:43:13.847128 |
orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:43:13.847149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 17:43:13.847194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:43:13.847201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 17:43:13.847275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:43:13.847287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:43:13.847300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 17:43:13.847396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 17:43:13.847423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:43:13.847440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 
'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:43:13.847463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 
'timeout': '30'}}})  2025-06-02 17:43:13.847477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:43:13.847518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 17:43:13.847580 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:43:13.847604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 17:43:13.847611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:43:13.847631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 
'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.847638 | orchestrator |
2025-06-02 17:43:13.847645 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-06-02 17:43:13.847656 | orchestrator | Monday 02 June 2025 17:40:24 +0000 (0:00:04.260) 0:04:00.119 ***********
2025-06-02 17:43:13.847667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:43:13.847675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value':
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:43:13.847689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 
17:43:13.847723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 17:43:13.847737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 17:43:13.847769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847776 | orchestrator | 2025-06-02 17:43:13 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:43:13.847784 | orchestrator | 2025-06-02 17:43:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:43:13.847791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:43:13.847823 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847860 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:43:13.847871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:43:13.847894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 
'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-02 17:43:13.847936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.847969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 17:43:13.847976 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.847997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:43:13.848004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-02 17:43:13.848289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.848310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.848317 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.848325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:43:13.848351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.848360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.848374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 17:43:13.848382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-02 17:43:13.848411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:43:13.848419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}}})  2025-06-02 17:43:13.848432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.848443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.848455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-02 17:43:13.848463 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.848471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:43:13.848478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.848486 
| orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.848493 | orchestrator | 2025-06-02 17:43:13.848501 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-06-02 17:43:13.848512 | orchestrator | Monday 02 June 2025 17:40:25 +0000 (0:00:01.532) 0:04:01.652 *********** 2025-06-02 17:43:13.848520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 17:43:13.848528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 17:43:13.848535 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.848542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 17:43:13.848550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 17:43:13.848557 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.848568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-02 17:43:13.848575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-02 17:43:13.848582 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.848590 | orchestrator | 
2025-06-02 17:43:13.848597 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-06-02 17:43:13.848605 | orchestrator | Monday 02 June 2025 17:40:27 +0000 (0:00:02.156) 0:04:03.808 *********** 2025-06-02 17:43:13.848612 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.848619 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.848626 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.848633 | orchestrator | 2025-06-02 17:43:13.848641 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-06-02 17:43:13.848648 | orchestrator | Monday 02 June 2025 17:40:29 +0000 (0:00:01.334) 0:04:05.142 *********** 2025-06-02 17:43:13.848655 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.848662 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.848669 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.848676 | orchestrator | 2025-06-02 17:43:13.848683 | orchestrator | TASK [include_role : placement] ************************************************ 2025-06-02 17:43:13.848690 | orchestrator | Monday 02 June 2025 17:40:31 +0000 (0:00:02.131) 0:04:07.274 *********** 2025-06-02 17:43:13.848698 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.848705 | orchestrator | 2025-06-02 17:43:13.848712 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-06-02 17:43:13.848719 | orchestrator | Monday 02 June 2025 17:40:32 +0000 (0:00:01.199) 0:04:08.473 *********** 2025-06-02 17:43:13.848731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.848746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.848754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.848761 | orchestrator | 2025-06-02 17:43:13.848772 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-02 17:43:13.848780 | orchestrator | Monday 02 June 2025 17:40:35 +0000 (0:00:03.373) 0:04:11.846 *********** 2025-06-02 17:43:13.848788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.848795 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.848807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.848819 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.848827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.848834 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.848841 | orchestrator | 2025-06-02 17:43:13.848849 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-06-02 17:43:13.848857 | orchestrator | Monday 02 June 2025 17:40:36 
+0000 (0:00:00.550) 0:04:12.397 *********** 2025-06-02 17:43:13.848865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 17:43:13.848874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 17:43:13.848883 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.848891 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 17:43:13.848900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 17:43:13.848908 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.848916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 17:43:13.848924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-02 17:43:13.848933 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.848941 | orchestrator | 2025-06-02 17:43:13.848949 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-06-02 17:43:13.848957 | orchestrator | 
Monday 02 June 2025 17:40:37 +0000 (0:00:00.743) 0:04:13.140 *********** 2025-06-02 17:43:13.848965 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.848973 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.848980 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.848988 | orchestrator | 2025-06-02 17:43:13.848996 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-06-02 17:43:13.849005 | orchestrator | Monday 02 June 2025 17:40:38 +0000 (0:00:01.647) 0:04:14.788 *********** 2025-06-02 17:43:13.849013 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.849021 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.849029 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.849037 | orchestrator | 2025-06-02 17:43:13.849044 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-06-02 17:43:13.849057 | orchestrator | Monday 02 June 2025 17:40:40 +0000 (0:00:02.071) 0:04:16.859 *********** 2025-06-02 17:43:13.849066 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.849074 | orchestrator | 2025-06-02 17:43:13.849082 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-06-02 17:43:13.849090 | orchestrator | Monday 02 June 2025 17:40:42 +0000 (0:00:01.246) 0:04:18.105 *********** 2025-06-02 17:43:13.849135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.849146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.849155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  
2025-06-02 17:43:13.849169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.849200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.849209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.849217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.849225 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.849247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.849255 | orchestrator | 2025-06-02 17:43:13.849262 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-02 17:43:13.849286 | orchestrator | Monday 02 June 2025 17:40:46 +0000 (0:00:04.763) 0:04:22.869 *********** 2025-06-02 17:43:13.849308 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.849316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.849324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.849331 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.849343 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.849351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.849385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 
'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.849394 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.849407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.849415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 
'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.849423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.849430 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.849438 | orchestrator | 2025-06-02 17:43:13.849445 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-02 17:43:13.849456 | orchestrator | Monday 02 June 2025 17:40:47 +0000 (0:00:01.001) 0:04:23.871 *********** 2025-06-02 17:43:13.849464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 17:43:13.849477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 17:43:13.849484 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 17:43:13.849492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 17:43:13.849499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 17:43:13.849507 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.849514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 17:43:13.849525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 17:43:13.849533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 17:43:13.849540 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.849547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 17:43:13.849555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-02 17:43:13.849562 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 17:43:13.849570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-02 17:43:13.849577 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.849584 | orchestrator | 2025-06-02 17:43:13.849591 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-02 17:43:13.849598 | orchestrator | Monday 02 June 2025 17:40:48 +0000 (0:00:00.922) 0:04:24.793 *********** 2025-06-02 17:43:13.849605 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.849612 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.849619 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.849626 | orchestrator | 2025-06-02 17:43:13.849634 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-02 17:43:13.849641 | orchestrator | Monday 02 June 2025 17:40:50 +0000 (0:00:01.715) 0:04:26.509 *********** 2025-06-02 17:43:13.849648 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.849655 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.849662 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.849669 | orchestrator | 2025-06-02 17:43:13.849676 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-02 17:43:13.849688 | orchestrator | Monday 02 June 2025 17:40:52 +0000 (0:00:02.159) 0:04:28.668 *********** 2025-06-02 17:43:13.849695 | 
orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:13.849702 | orchestrator |
2025-06-02 17:43:13.849709 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2025-06-02 17:43:13.849717 | orchestrator | Monday 02 June 2025 17:40:54 +0000 (0:00:01.597) 0:04:30.266 ***********
2025-06-02 17:43:13.849724 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2025-06-02 17:43:13.849731 | orchestrator |
2025-06-02 17:43:13.849738 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2025-06-02 17:43:13.849745 | orchestrator | Monday 02 June 2025 17:40:55 +0000 (0:00:01.168) 0:04:31.434 ***********
2025-06-02 17:43:13.849756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 17:43:13.849765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 17:43:13.849772 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 17:43:13.849780 | orchestrator |
2025-06-02 17:43:13.849791 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2025-06-02 17:43:13.849798 | orchestrator | Monday 02 June 2025 17:40:59 +0000 (0:00:03.980) 0:04:35.415 ***********
2025-06-02 17:43:13.849806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 17:43:13.849813 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.849821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 17:43:13.849828 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.849836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 17:43:13.849851 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.849858 | orchestrator |
2025-06-02 17:43:13.849865 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2025-06-02 17:43:13.849872 | orchestrator | Monday 02 June 2025 17:41:01 +0000 (0:00:01.639) 0:04:37.054 ***********
2025-06-02 17:43:13.849880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-06-02 17:43:13.849887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-06-02 17:43:13.849895 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.849906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-06-02 17:43:13.849914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-06-02 17:43:13.849921 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.849928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-06-02 17:43:13.849936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2025-06-02 17:43:13.849943 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.849950 | orchestrator |
2025-06-02 17:43:13.849957 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-06-02 17:43:13.849965 | orchestrator | Monday 02 June 2025 17:41:03 +0000 (0:00:01.971) 0:04:39.025 ***********
2025-06-02 17:43:13.849972 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.849979 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.849986 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.849993 | orchestrator |
2025-06-02 17:43:13.850000 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-06-02 17:43:13.850007 | orchestrator | Monday 02 June 2025 17:41:05 +0000 (0:00:02.400) 0:04:41.425 ***********
2025-06-02 17:43:13.850049 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.850057 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.850064 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.850071 | orchestrator |
2025-06-02 17:43:13.850083 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2025-06-02 17:43:13.850090 | orchestrator | Monday 02 June 2025 17:41:08 +0000 (0:00:03.009) 0:04:44.435 ***********
2025-06-02 17:43:13.850143 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2025-06-02 17:43:13.850151 | orchestrator |
2025-06-02 17:43:13.850159 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2025-06-02 17:43:13.850172 | orchestrator | Monday 02 June 2025 17:41:09 +0000 (0:00:00.847) 0:04:45.283 ***********
2025-06-02 17:43:13.850191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 17:43:13.850199 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.850207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 17:43:13.850214 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.850222 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 17:43:13.850229 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.850237 | orchestrator |
2025-06-02 17:43:13.850244 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2025-06-02 17:43:13.850252 | orchestrator | Monday 02 June 2025 17:41:10 +0000 (0:00:01.292) 0:04:46.575 ***********
2025-06-02 17:43:13.850263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 17:43:13.850270 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.850278 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 17:43:13.850285 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.850293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2025-06-02 17:43:13.850305 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.850313 | orchestrator |
2025-06-02 17:43:13.850324 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] ***
2025-06-02 17:43:13.850331 | orchestrator | Monday 02 June 2025 17:41:12 +0000 (0:00:01.669) 0:04:48.245 ***********
2025-06-02 17:43:13.850339 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.850346 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.850353 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.850360 | orchestrator |
2025-06-02 17:43:13.850367 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-06-02 17:43:13.850374 | orchestrator | Monday 02 June 2025 17:41:13 +0000 (0:00:01.278) 0:04:49.523 ***********
2025-06-02 17:43:13.850382 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:13.850389 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:13.850396 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:13.850403 | orchestrator |
2025-06-02 17:43:13.850411 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-06-02 17:43:13.850418 | orchestrator | Monday 02 June 2025 17:41:15 +0000 (0:00:02.321) 0:04:51.844 ***********
2025-06-02 17:43:13.850425 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:13.850432 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:13.850439 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:13.850446 | orchestrator |
2025-06-02 17:43:13.850454 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] *****************
2025-06-02 17:43:13.850461 | orchestrator | Monday 02 June 2025 17:41:19 +0000 (0:00:03.106) 0:04:54.951 ***********
2025-06-02 17:43:13.850468 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy)
2025-06-02 17:43:13.850475 | orchestrator |
2025-06-02 17:43:13.850483 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] ***
2025-06-02 17:43:13.850490 | orchestrator | Monday 02 June 2025 17:41:20 +0000 (0:00:01.107) 0:04:56.059 ***********
2025-06-02 17:43:13.850497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 17:43:13.850505 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.850512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 17:43:13.850520 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.850531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 17:43:13.850539 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.850546 | orchestrator |
2025-06-02 17:43:13.850553 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] ***
2025-06-02 17:43:13.850565 | orchestrator | Monday 02 June 2025 17:41:21 +0000 (0:00:01.043) 0:04:57.102 ***********
2025-06-02 17:43:13.850573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 17:43:13.850580 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.850598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 17:43:13.850605 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.850613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-06-02 17:43:13.850620 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.850627 | orchestrator |
2025-06-02 17:43:13.850635 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-06-02 17:43:13.850642 | orchestrator | Monday 02 June 2025 17:41:22 +0000 (0:00:01.403) 0:04:58.505 ***********
2025-06-02 17:43:13.850649 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.850656 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.850663 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.850671 | orchestrator |
2025-06-02 17:43:13.850677 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-06-02 17:43:13.850684 | orchestrator | Monday 02 June 2025 17:41:24 +0000 (0:00:01.927) 0:05:00.432 ***********
2025-06-02 17:43:13.850691 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:13.850697 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:13.850704 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:13.850710 | orchestrator |
2025-06-02 17:43:13.850717 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-06-02 17:43:13.850724 | orchestrator | Monday 02 June 2025 17:41:26 +0000 (0:00:02.358) 0:05:02.791 ***********
2025-06-02 17:43:13.850730 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:13.850737 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:13.850744 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:13.850750 | orchestrator |
2025-06-02 17:43:13.850757 | orchestrator | TASK [include_role : octavia] **************************************************
2025-06-02 17:43:13.850763 | orchestrator | Monday 02 June 2025 17:41:30 +0000 (0:00:03.290) 0:05:06.081 ***********
2025-06-02 17:43:13.850770 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:43:13.850777 | orchestrator |
2025-06-02 17:43:13.850783 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-06-02 17:43:13.850790 | orchestrator | Monday 02 June 2025 17:41:31 +0000 (0:00:01.327) 0:05:07.408 ***********
2025-06-02 17:43:13.850800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 17:43:13.850812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 17:43:13.850820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 17:43:13.850831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 17:43:13.850839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.850846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 17:43:13.850862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 17:43:13.850870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 17:43:13.850877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 17:43:13.850888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.850895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 17:43:13.850902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 17:43:13.850934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 17:43:13.850945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 17:43:13.850953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.850959 | orchestrator |
2025-06-02 17:43:13.850966 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2025-06-02 17:43:13.850973 | orchestrator | Monday 02 June 2025 17:41:35 +0000 (0:00:03.807) 0:05:11.216 ***********
2025-06-02 17:43:13.850985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 17:43:13.850992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 17:43:13.850999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 17:43:13.851011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 17:43:13.851022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.851028 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.851036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 17:43:13.851047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 17:43:13.851054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 17:43:13.851061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 17:43:13.851072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:43:13.851079 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.851090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 17:43:13.851116 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 17:43:13.851127 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 17:43:13.851135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 17:43:13.851142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:43:13.851153 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.851160 | orchestrator | 2025-06-02 17:43:13.851166 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-02 17:43:13.851173 | orchestrator | Monday 02 June 2025 17:41:36 +0000 (0:00:00.773) 0:05:11.989 *********** 2025-06-02 17:43:13.851180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 17:43:13.851187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 17:43:13.851194 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.851201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 17:43:13.851207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 17:43:13.851214 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.851225 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 17:43:13.851232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-02 17:43:13.851238 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.851245 | orchestrator | 2025-06-02 17:43:13.851252 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-02 17:43:13.851258 | orchestrator | Monday 02 June 2025 17:41:36 +0000 (0:00:00.903) 0:05:12.892 *********** 2025-06-02 17:43:13.851265 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.851272 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.851278 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.851285 | orchestrator | 2025-06-02 17:43:13.851291 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-02 17:43:13.851298 | orchestrator | Monday 02 June 2025 17:41:38 +0000 (0:00:01.913) 0:05:14.806 *********** 2025-06-02 17:43:13.851305 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:43:13.851311 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:43:13.851318 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:43:13.851325 | orchestrator | 2025-06-02 17:43:13.851331 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-02 17:43:13.851338 | orchestrator | Monday 02 June 2025 17:41:41 +0000 (0:00:02.297) 0:05:17.103 *********** 2025-06-02 17:43:13.851344 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.851351 | orchestrator | 2025-06-02 17:43:13.851358 | orchestrator | TASK 
[haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-02 17:43:13.851364 | orchestrator | Monday 02 June 2025 17:41:42 +0000 (0:00:01.411) 0:05:18.515 *********** 2025-06-02 17:43:13.851375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:43:13.851388 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:43:13.851396 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:43:13.851407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:43:13.851419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': 
{'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:43:13.851432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:43:13.851440 | orchestrator | 2025-06-02 17:43:13.851447 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-02 17:43:13.851454 | orchestrator | Monday 02 June 2025 17:41:48 +0000 (0:00:05.541) 0:05:24.056 *********** 2025-06-02 17:43:13.851461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:43:13.851472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:43:13.851479 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.851490 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:43:13.851502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:43:13.851509 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.851516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:43:13.851526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 
'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:43:13.851534 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.851541 | orchestrator | 2025-06-02 17:43:13.851548 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-02 17:43:13.851554 | orchestrator | Monday 02 June 2025 17:41:49 +0000 (0:00:01.037) 0:05:25.094 *********** 2025-06-02 17:43:13.851561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 17:43:13.851568 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 17:43:13.851581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 17:43:13.851592 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.851599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 17:43:13.851606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 17:43:13.851613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 17:43:13.851620 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.851627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-02 17:43:13.851633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 17:43:13.851641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-02 17:43:13.851647 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.851654 | orchestrator | 2025-06-02 17:43:13.851661 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-02 17:43:13.851668 | orchestrator | Monday 02 June 2025 17:41:50 +0000 (0:00:01.135) 0:05:26.230 *********** 2025-06-02 17:43:13.851674 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.851681 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.851687 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.851694 | orchestrator | 2025-06-02 17:43:13.851700 | orchestrator | TASK [proxysql-config : Copying over opensearch 
ProxySQL rules config] ********* 2025-06-02 17:43:13.851707 | orchestrator | Monday 02 June 2025 17:41:50 +0000 (0:00:00.462) 0:05:26.693 *********** 2025-06-02 17:43:13.851714 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.851720 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.851727 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.851733 | orchestrator | 2025-06-02 17:43:13.851740 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-02 17:43:13.851746 | orchestrator | Monday 02 June 2025 17:41:52 +0000 (0:00:01.428) 0:05:28.122 *********** 2025-06-02 17:43:13.851753 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.851760 | orchestrator | 2025-06-02 17:43:13.851766 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-02 17:43:13.851777 | orchestrator | Monday 02 June 2025 17:41:53 +0000 (0:00:01.772) 0:05:29.894 *********** 2025-06-02 17:43:13.851788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 17:43:13.851799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:43:13.851811 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.851819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.851826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:43:13.851833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 17:43:13.851840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:43:13.851851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.851862 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.851869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:43:13.851881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 17:43:13.851888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:43:13.851895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.851902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.851912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:43:13.851924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 17:43:13.851936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': 
['timeout server 45s']}}}})  2025-06-02 17:43:13.851944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.851951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.851958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:43:13.851968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 17:43:13.851980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 17:43:13.851991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.851999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:43:13.852013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 17:43:13.852027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 17:43:13.852035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:43:13.852060 | orchestrator | 2025-06-02 17:43:13.852066 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-02 17:43:13.852073 | orchestrator | Monday 02 June 2025 17:41:58 +0000 (0:00:04.483) 0:05:34.378 *********** 2025-06-02 17:43:13.852080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 17:43:13.852087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:43:13.852136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:43:13.852167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 17:43:13.852175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 17:43:13.852182 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:43:13.852210 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.852217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 17:43:13.852224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:43:13.852276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:43:13.852308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 17:43:13.852316 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 17:43:13.852323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852333 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:43:13.852348 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.852355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 17:43:13.852366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:43:13.852376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:43:13.852402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 17:43:13.852409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-02 17:43:13.852421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852428 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:43:13.852438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:43:13.852445 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.852452 | orchestrator | 2025-06-02 17:43:13.852458 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-06-02 17:43:13.852465 | orchestrator | Monday 02 June 2025 17:42:00 +0000 (0:00:01.733) 0:05:36.111 *********** 2025-06-02 17:43:13.852472 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 17:43:13.852479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 17:43:13.852486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 
'active_passive': True}})  2025-06-02 17:43:13.852496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 17:43:13.852504 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.852511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 17:43:13.852518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 17:43:13.852525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 17:43:13.852536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 17:43:13.852543 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.852550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-02 17:43:13.852557 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-02 17:43:13.852564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 17:43:13.852571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-02 17:43:13.852577 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.852584 | orchestrator | 2025-06-02 17:43:13.852590 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-02 17:43:13.852597 | orchestrator | Monday 02 June 2025 17:42:01 +0000 (0:00:01.140) 0:05:37.251 *********** 2025-06-02 17:43:13.852604 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.852610 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.852620 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.852627 | orchestrator | 2025-06-02 17:43:13.852633 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-02 17:43:13.852640 | orchestrator | Monday 02 June 2025 17:42:01 +0000 (0:00:00.461) 0:05:37.713 *********** 2025-06-02 17:43:13.852647 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.852653 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.852660 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.852666 | orchestrator 
| 2025-06-02 17:43:13.852673 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-02 17:43:13.852680 | orchestrator | Monday 02 June 2025 17:42:03 +0000 (0:00:01.777) 0:05:39.491 *********** 2025-06-02 17:43:13.852687 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.852693 | orchestrator | 2025-06-02 17:43:13.852699 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-02 17:43:13.852705 | orchestrator | Monday 02 June 2025 17:42:05 +0000 (0:00:01.714) 0:05:41.206 *********** 2025-06-02 17:43:13.852714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 17:43:13.852726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 17:43:13.852733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-02 17:43:13.852740 | orchestrator | 2025-06-02 17:43:13.852746 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-02 17:43:13.852753 | orchestrator | Monday 02 June 2025 17:42:07 +0000 (0:00:02.622) 0:05:43.828 *********** 2025-06-02 17:43:13.852762 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 17:43:13.852769 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.852778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 
'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 17:43:13.852790 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.852796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-02 17:43:13.852803 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.852809 | orchestrator | 2025-06-02 17:43:13.852815 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-02 17:43:13.852821 | orchestrator | Monday 02 June 2025 17:42:08 +0000 (0:00:00.410) 0:05:44.238 *********** 2025-06-02 17:43:13.852827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 17:43:13.852834 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.852840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 17:43:13.852846 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.852852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-02 17:43:13.852859 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.852865 | orchestrator | 2025-06-02 17:43:13.852871 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-02 17:43:13.852877 | orchestrator | Monday 02 June 2025 17:42:09 +0000 (0:00:01.013) 0:05:45.252 *********** 2025-06-02 17:43:13.852883 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.852889 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.852895 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.852901 | orchestrator | 2025-06-02 17:43:13.852907 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-02 17:43:13.852914 | orchestrator | Monday 02 June 2025 17:42:09 +0000 (0:00:00.519) 0:05:45.771 *********** 2025-06-02 17:43:13.852920 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.852926 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.852932 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.852938 | orchestrator | 2025-06-02 17:43:13.852944 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-02 17:43:13.852950 | orchestrator | Monday 02 June 2025 17:42:11 +0000 (0:00:01.415) 0:05:47.186 *********** 2025-06-02 17:43:13.852957 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:43:13.852963 | orchestrator | 2025-06-02 17:43:13.852969 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-06-02 17:43:13.852980 | orchestrator | Monday 02 June 2025 17:42:13 +0000 (0:00:01.869) 0:05:49.056 *********** 
2025-06-02 17:43:13.852986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.852997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.853060 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.853075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.853086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.853127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-02 17:43:13.853134 | orchestrator | 2025-06-02 17:43:13.853141 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-02 17:43:13.853147 | orchestrator | Monday 02 June 2025 17:42:19 
+0000 (0:00:06.448) 0:05:55.504 *********** 2025-06-02 17:43:13.853154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.853160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.853167 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.853177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.853190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.853197 | orchestrator 
| skipping: [testbed-node-1] 2025-06-02 17:43:13.853203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.853210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-02 17:43:13.853216 | orchestrator | skipping: 
[testbed-node-2] 2025-06-02 17:43:13.853223 | orchestrator | 2025-06-02 17:43:13.853229 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-02 17:43:13.853235 | orchestrator | Monday 02 June 2025 17:42:20 +0000 (0:00:00.620) 0:05:56.125 *********** 2025-06-02 17:43:13.853246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 17:43:13.853255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 17:43:13.853262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 17:43:13.853268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 17:43:13.853275 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:43:13.853281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 17:43:13.853287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 17:43:13.853294 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 17:43:13.853300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 17:43:13.853306 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:43:13.853320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 17:43:13.853331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-02 17:43:13.853341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 17:43:13.853351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-02 17:43:13.853363 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:43:13.853373 | orchestrator | 2025-06-02 17:43:13.853382 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-02 17:43:13.853392 | orchestrator | Monday 02 June 2025 17:42:21 +0000 (0:00:01.739) 0:05:57.865 *********** 2025-06-02 17:43:13.853402 | orchestrator | changed: [testbed-node-0] 
2025-06-02 17:43:13.853413 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.853421 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.853427 | orchestrator |
2025-06-02 17:43:13.853433 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-06-02 17:43:13.853439 | orchestrator | Monday 02 June 2025 17:42:23 +0000 (0:00:01.337) 0:05:59.203 ***********
2025-06-02 17:43:13.853445 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.853451 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.853462 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.853468 | orchestrator |
2025-06-02 17:43:13.853474 | orchestrator | TASK [include_role : swift] ****************************************************
2025-06-02 17:43:13.853480 | orchestrator | Monday 02 June 2025 17:42:25 +0000 (0:00:02.248) 0:06:01.451 ***********
2025-06-02 17:43:13.853487 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.853493 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.853499 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.853505 | orchestrator |
2025-06-02 17:43:13.853511 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-06-02 17:43:13.853517 | orchestrator | Monday 02 June 2025 17:42:25 +0000 (0:00:00.356) 0:06:01.808 ***********
2025-06-02 17:43:13.853523 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.853529 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.853535 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.853541 | orchestrator |
2025-06-02 17:43:13.853547 | orchestrator | TASK [include_role : trove] ****************************************************
2025-06-02 17:43:13.853553 | orchestrator | Monday 02 June 2025 17:42:26 +0000 (0:00:00.328) 0:06:02.137 ***********
2025-06-02 17:43:13.853559 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.853565 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.853572 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.853578 | orchestrator |
2025-06-02 17:43:13.853585 | orchestrator | TASK [include_role : venus] ****************************************************
2025-06-02 17:43:13.853596 | orchestrator | Monday 02 June 2025 17:42:26 +0000 (0:00:00.675) 0:06:02.813 ***********
2025-06-02 17:43:13.853605 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.853615 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.853629 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.853636 | orchestrator |
2025-06-02 17:43:13.853642 | orchestrator | TASK [include_role : watcher] **************************************************
2025-06-02 17:43:13.853648 | orchestrator | Monday 02 June 2025 17:42:27 +0000 (0:00:00.335) 0:06:03.148 ***********
2025-06-02 17:43:13.853654 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.853660 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.853666 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.853672 | orchestrator |
2025-06-02 17:43:13.853678 | orchestrator | TASK [include_role : zun] ******************************************************
2025-06-02 17:43:13.853685 | orchestrator | Monday 02 June 2025 17:42:27 +0000 (0:00:00.355) 0:06:03.503 ***********
2025-06-02 17:43:13.853691 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.853697 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.853703 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.853709 | orchestrator |
2025-06-02 17:43:13.853715 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-06-02 17:43:13.853721 | orchestrator | Monday 02 June 2025 17:42:28 +0000 (0:00:00.933) 0:06:04.437 ***********
2025-06-02 17:43:13.853727 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:13.853733 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:13.853739 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:13.853746 | orchestrator |
2025-06-02 17:43:13.853752 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-06-02 17:43:13.853758 | orchestrator | Monday 02 June 2025 17:42:29 +0000 (0:00:00.694) 0:06:05.132 ***********
2025-06-02 17:43:13.853764 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:13.853770 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:13.853776 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:13.853782 | orchestrator |
2025-06-02 17:43:13.853788 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-06-02 17:43:13.853794 | orchestrator | Monday 02 June 2025 17:42:29 +0000 (0:00:00.400) 0:06:05.532 ***********
2025-06-02 17:43:13.853800 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:13.853806 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:13.853812 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:13.853823 | orchestrator |
2025-06-02 17:43:13.853829 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-06-02 17:43:13.853835 | orchestrator | Monday 02 June 2025 17:42:30 +0000 (0:00:00.891) 0:06:06.423 ***********
2025-06-02 17:43:13.853841 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:13.853847 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:13.853857 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:13.853863 | orchestrator |
2025-06-02 17:43:13.853869 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-06-02 17:43:13.853875 | orchestrator | Monday 02 June 2025 17:42:31 +0000 (0:00:01.327) 0:06:07.751 ***********
2025-06-02 17:43:13.853882 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:13.853888 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:13.853894 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:13.853900 | orchestrator |
2025-06-02 17:43:13.853906 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-06-02 17:43:13.853912 | orchestrator | Monday 02 June 2025 17:42:32 +0000 (0:00:00.869) 0:06:08.621 ***********
2025-06-02 17:43:13.853919 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.853925 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.853931 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.853937 | orchestrator |
2025-06-02 17:43:13.853943 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-06-02 17:43:13.853949 | orchestrator | Monday 02 June 2025 17:42:42 +0000 (0:00:10.113) 0:06:18.734 ***********
2025-06-02 17:43:13.853955 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:13.853962 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:13.853968 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:13.853974 | orchestrator |
2025-06-02 17:43:13.853980 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-06-02 17:43:13.853986 | orchestrator | Monday 02 June 2025 17:42:43 +0000 (0:00:00.749) 0:06:19.484 ***********
2025-06-02 17:43:13.853992 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.853998 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.854005 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.854011 | orchestrator |
2025-06-02 17:43:13.854040 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-06-02 17:43:13.854046 | orchestrator | Monday 02 June 2025 17:42:56 +0000 (0:00:13.293) 0:06:32.778 ***********
2025-06-02 17:43:13.854052 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:13.854058 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:13.854064 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:13.854070 | orchestrator |
2025-06-02 17:43:13.854076 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-06-02 17:43:13.854083 | orchestrator | Monday 02 June 2025 17:42:57 +0000 (0:00:00.782) 0:06:33.560 ***********
2025-06-02 17:43:13.854089 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:43:13.854110 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:43:13.854117 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:43:13.854123 | orchestrator |
2025-06-02 17:43:13.854129 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-06-02 17:43:13.854135 | orchestrator | Monday 02 June 2025 17:43:02 +0000 (0:00:04.534) 0:06:38.095 ***********
2025-06-02 17:43:13.854141 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.854147 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.854153 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.854159 | orchestrator |
2025-06-02 17:43:13.854166 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-06-02 17:43:13.854172 | orchestrator | Monday 02 June 2025 17:43:02 +0000 (0:00:00.367) 0:06:38.463 ***********
2025-06-02 17:43:13.854178 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.854184 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.854190 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.854196 | orchestrator |
2025-06-02 17:43:13.854202 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-06-02 17:43:13.854214 | orchestrator | Monday 02 June 2025 17:43:03 +0000 (0:00:00.722) 0:06:39.185 ***********
2025-06-02 17:43:13.854220 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.854226 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.854232 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.854238 | orchestrator |
2025-06-02 17:43:13.854248 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-06-02 17:43:13.854255 | orchestrator | Monday 02 June 2025 17:43:03 +0000 (0:00:00.348) 0:06:39.533 ***********
2025-06-02 17:43:13.854261 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.854267 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.854273 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.854279 | orchestrator |
2025-06-02 17:43:13.854285 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-06-02 17:43:13.854291 | orchestrator | Monday 02 June 2025 17:43:03 +0000 (0:00:00.348) 0:06:39.882 ***********
2025-06-02 17:43:13.854297 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.854304 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.854310 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.854316 | orchestrator |
2025-06-02 17:43:13.854322 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-06-02 17:43:13.854328 | orchestrator | Monday 02 June 2025 17:43:04 +0000 (0:00:00.368) 0:06:40.251 ***********
2025-06-02 17:43:13.854334 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:43:13.854340 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:43:13.854346 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:43:13.854352 | orchestrator |
2025-06-02 17:43:13.854359 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-06-02 17:43:13.854365 | orchestrator | Monday 02 June 2025 17:43:05 +0000 (0:00:00.693) 0:06:40.944 ***********
2025-06-02 17:43:13.854371 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:13.854377 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:13.854383 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:13.854389 | orchestrator |
2025-06-02 17:43:13.854395 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-06-02 17:43:13.854402 | orchestrator | Monday 02 June 2025 17:43:09 +0000 (0:00:04.432) 0:06:45.377 ***********
2025-06-02 17:43:13.854408 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:43:13.854414 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:43:13.854420 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:43:13.854426 | orchestrator |
2025-06-02 17:43:13.854432 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:43:13.854439 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0  failed=0  skipped=97  rescued=0  ignored=0
2025-06-02 17:43:13.854449 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0  failed=0  skipped=97  rescued=0  ignored=0
2025-06-02 17:43:13.854455 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0  failed=0  skipped=97  rescued=0  ignored=0
2025-06-02 17:43:13.854461 | orchestrator |
2025-06-02 17:43:13.854467 | orchestrator |
2025-06-02 17:43:13.854473 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:43:13.854479 | orchestrator | Monday 02 June 2025 17:43:10 +0000 (0:00:00.854) 0:06:46.232 ***********
2025-06-02 17:43:13.854486 | orchestrator | ===============================================================================
2025-06-02 17:43:13.854492 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.29s
2025-06-02 17:43:13.854498 | orchestrator | loadbalancer : Start backup haproxy container -------------------------- 10.11s
2025-06-02 17:43:13.854504 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 6.52s
2025-06-02 17:43:13.854510 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.45s
2025-06-02 17:43:13.854520 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.12s
2025-06-02 17:43:13.854527 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.66s
2025-06-02 17:43:13.854532 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.54s
2025-06-02 17:43:13.854538 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 5.21s
2025-06-02 17:43:13.854544 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 4.93s
2025-06-02 17:43:13.854550 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.90s
2025-06-02 17:43:13.854557 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.76s
2025-06-02 17:43:13.854563 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.53s
2025-06-02 17:43:13.854569 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.48s
2025-06-02 17:43:13.854575 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.43s
2025-06-02 17:43:13.854581 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 4.28s
2025-06-02 17:43:13.854587 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.26s
2025-06-02 17:43:13.854593 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.11s
2025-06-02 17:43:13.854599 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.03s
2025-06-02 17:43:13.854605 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.98s
2025-06-02 17:43:13.854611 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.93s
2025-06-02 17:43:16.888247 | orchestrator | 2025-06-02 17:43:16 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:43:16.888534 | orchestrator | 2025-06-02 17:43:16 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED
2025-06-02 17:43:16.889159 | orchestrator | 2025-06-02 17:43:16 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED
2025-06-02 17:43:16.889208 | orchestrator | 2025-06-02 17:43:16 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:43:19.937083 | orchestrator | 2025-06-02 17:43:19 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:43:19.940358 | orchestrator | 2025-06-02 17:43:19 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED
2025-06-02 17:43:19.941469 | orchestrator | 2025-06-02 17:43:19 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED
2025-06-02 17:43:19.941512 | orchestrator | 2025-06-02 17:43:19 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:43:22.989428 | orchestrator | 2025-06-02 17:43:22 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:43:22.989867 | orchestrator | 2025-06-02 17:43:22 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED
2025-06-02 17:43:22.990729 | orchestrator | 2025-06-02 17:43:22 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED
2025-06-02 17:43:22.996656 | orchestrator | 2025-06-02 17:43:22 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:43:26.041577 | orchestrator | 2025-06-02 17:43:26 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state STARTED
2025-06-02 17:43:26.042964 | orchestrator | 2025-06-02 17:43:26 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED
2025-06-02 17:43:26.044044 | orchestrator | 2025-06-02 17:43:26 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED
2025-06-02 17:43:26.044095 | orchestrator | 2025-06-02 17:43:26 | INFO  | Wait 1 second(s) until the next check
[... repeated status-check cycles omitted: the same three tasks polled every ~3 s, all remaining in state STARTED through 2025-06-02 17:45:12 ...]
2025-06-02 17:45:16.016384 | orchestrator | 2025-06-02 17:45:16 | INFO  | Task dee94b30-b977-43d6-9857-6da326336af3 is in state SUCCESS
2025-06-02 17:45:16.020408 | orchestrator |
2025-06-02 17:45:16.020470 | orchestrator |
2025-06-02 17:45:16.020477 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-06-02 17:45:16.020483 | orchestrator |
2025-06-02 17:45:16.020489 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-06-02 17:45:16.020494 | orchestrator | Monday 02 June 2025 17:33:28 +0000 (0:00:01.036) 0:00:01.036 ***********
2025-06-02 17:45:16.020517 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:45:16.020522 | orchestrator |
2025-06-02 17:45:16.020527 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-06-02 17:45:16.020531 | orchestrator | Monday 02 June 2025 17:33:29 +0000 (0:00:01.146) 0:00:02.183 ***********
2025-06-02 17:45:16.020535 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.020540 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.020544 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.020548 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.020552 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.020556 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.020560 | orchestrator |
2025-06-02 17:45:16.020565 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-06-02 17:45:16.020569 | orchestrator | Monday 02 June 2025 17:33:30 +0000 (0:00:01.644) 0:00:03.828 ***********
2025-06-02 17:45:16.020573 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.020576 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.020580 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.020584 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.020588 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.020592 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.020596 | orchestrator |
2025-06-02 17:45:16.020600 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-06-02 17:45:16.020604 | orchestrator | Monday 02 June 2025 17:33:31 +0000 (0:00:00.998) 0:00:04.826 ***********
2025-06-02 17:45:16.020608 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.020611 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.020615 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.020619 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.020623 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.020627 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.020631 | orchestrator |
2025-06-02 17:45:16.020635 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-06-02 17:45:16.020639 | orchestrator | Monday 02 June 2025 17:33:32 +0000 (0:00:01.172) 0:00:05.999 ***********
2025-06-02 17:45:16.020643 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.020647 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.020651 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.020655 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.020659 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.020663 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.020666 | orchestrator |
2025-06-02 17:45:16.020670 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-06-02 17:45:16.020675 | orchestrator | Monday 02 June 2025 17:33:33 +0000 (0:00:00.809) 0:00:06.808 ***********
2025-06-02 17:45:16.020679 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.020682 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.020686 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.020690 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.020694 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.020698 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.020702 | orchestrator |
2025-06-02 17:45:16.020706 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-06-02 17:45:16.020710 | orchestrator | Monday 02 June 2025 17:33:34 +0000 (0:00:00.508) 0:00:07.317 ***********
2025-06-02 17:45:16.020714 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.020718 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.020722 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.020726 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.020729 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.020733 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.020737 | orchestrator |
2025-06-02 17:45:16.020741 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-06-02 17:45:16.020750 | orchestrator | Monday 02 June 2025 17:33:35 +0000 (0:00:00.822) 0:00:08.140 ***********
2025-06-02 17:45:16.020754 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.020759 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.020763 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.020767 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.020770 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.020774 | orchestrator | skipping:
[testbed-node-5] 2025-06-02 17:45:16.020778 | orchestrator | 2025-06-02 17:45:16.020792 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-02 17:45:16.020797 | orchestrator | Monday 02 June 2025 17:33:35 +0000 (0:00:00.703) 0:00:08.843 *********** 2025-06-02 17:45:16.020801 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.020805 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.020809 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.020813 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.020817 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.020821 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.020825 | orchestrator | 2025-06-02 17:45:16.020829 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-02 17:45:16.020833 | orchestrator | Monday 02 June 2025 17:33:36 +0000 (0:00:00.806) 0:00:09.649 *********** 2025-06-02 17:45:16.020837 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 17:45:16.020841 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 17:45:16.020845 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 17:45:16.020849 | orchestrator | 2025-06-02 17:45:16.020937 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-02 17:45:16.020942 | orchestrator | Monday 02 June 2025 17:33:37 +0000 (0:00:00.795) 0:00:10.445 *********** 2025-06-02 17:45:16.020946 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.020950 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.020953 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.020957 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.020961 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.020965 | orchestrator | ok: [testbed-node-5] 
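The wait loop at the top of this excerpt (poll each task's state, sleep one second between rounds, stop once everything reports SUCCESS) can be sketched as below. This is a minimal illustration, not the actual osism implementation; `wait_for_tasks` and the `get_state` callable are hypothetical names:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=600.0):
    """Poll get_state(task_id) for every task until all are terminal.

    Mirrors the log pattern above: log each state, then
    'Wait 1 second(s) until the next check' between rounds.
    """
    deadline = time.monotonic() + timeout
    while True:
        # One polling round: query the current state of every task.
        states = {tid: get_state(tid) for tid in task_ids}
        for tid, state in states.items():
            print(f"Task {tid} is in state {state}")
        # Done once every task reached a terminal state.
        if all(s in ("SUCCESS", "FAILURE") for s in states.values()):
            return states
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still running: {states}")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
```

The fixed one-second interval matches what the log shows; a production poller might add backoff or jitter instead.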
2025-06-02 17:45:16.020969 | orchestrator |
2025-06-02 17:45:16.020985 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-06-02 17:45:16.020990 | orchestrator | Monday 02 June 2025 17:33:38 +0000 (0:00:03.184) 0:00:11.857 ***********
2025-06-02 17:45:16.020994 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:45:16.020999 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-06-02 17:45:16.021004 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-06-02 17:45:16.021009 | orchestrator |
2025-06-02 17:45:16.021014 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-06-02 17:45:16.021018 | orchestrator | Monday 02 June 2025 17:33:42 +0000 (0:00:03.184) 0:00:15.042 ***********
2025-06-02 17:45:16.021024 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:45:16.021029 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 17:45:16.021033 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 17:45:16.021038 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021042 | orchestrator |
2025-06-02 17:45:16.021047 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-06-02 17:45:16.021116 | orchestrator | Monday 02 June 2025 17:33:43 +0000 (0:00:01.334) 0:00:16.379 ***********
2025-06-02 17:45:16.021124 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.021130 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.021140 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.021145 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021150 | orchestrator |
2025-06-02 17:45:16.021155 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-06-02 17:45:16.021160 | orchestrator | Monday 02 June 2025 17:33:44 +0000 (0:00:01.099) 0:00:17.479 ***********
2025-06-02 17:45:16.021166 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.021173 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.021181 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.021186 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021190 | orchestrator |
2025-06-02 17:45:16.021193 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-06-02 17:45:16.021204 | orchestrator | Monday 02 June 2025 17:33:45 +0000 (0:00:00.745) 0:00:18.224 ***********
2025-06-02 17:45:16.021210 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 17:33:39.529182', 'end': '2025-06-02 17:33:39.779116', 'delta': '0:00:00.249934', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.021221 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 17:33:40.570373', 'end': '2025-06-02 17:33:40.835120', 'delta': '0:00:00.264747', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.021226 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 17:33:41.438341', 'end': '2025-06-02 17:33:41.754559', 'delta': '0:00:00.316218', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.021233 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021237 | orchestrator |
2025-06-02 17:45:16.021241 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-06-02 17:45:16.021245 | orchestrator | Monday 02 June 2025 17:33:45 +0000 (0:00:00.458) 0:00:18.682 ***********
2025-06-02 17:45:16.021248 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.021252 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.021256 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.021260 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.021264 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.021269 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.021275 | orchestrator |
2025-06-02 17:45:16.021281 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-06-02 17:45:16.021287 | orchestrator | Monday 02 June 2025 17:33:48 +0000 (0:00:02.732) 0:00:21.414 ***********
2025-06-02 17:45:16.021293 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.021300 | orchestrator |
2025-06-02 17:45:16.021306 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-06-02 17:45:16.021312 | orchestrator | Monday 02 June 2025 17:33:49 +0000 (0:00:00.993) 0:00:22.407 ***********
2025-06-02 17:45:16.021317 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021325 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.021334 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.021342 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.021349 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.021354 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.021360 | orchestrator |
2025-06-02 17:45:16.021367 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-06-02 17:45:16.021373 | orchestrator | Monday 02 June 2025 17:33:51 +0000 (0:00:01.851) 0:00:24.259 ***********
2025-06-02 17:45:16.021379 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021385 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.021391 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.021398 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.021405 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.021411 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.021419 | orchestrator |
2025-06-02 17:45:16.021423 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-02 17:45:16.021427 | orchestrator | Monday 02 June 2025 17:33:53 +0000 (0:00:01.998) 0:00:26.258 ***********
2025-06-02 17:45:16.021434 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021438 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.021442 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.021446 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.021450 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.021454 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.021457 | orchestrator |
2025-06-02 17:45:16.021461 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-06-02 17:45:16.021465 | orchestrator | Monday 02 June 2025 17:33:54 +0000 (0:00:01.130) 0:00:27.388 ***********
2025-06-02 17:45:16.021469 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021473 | orchestrator |
2025-06-02 17:45:16.021477 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-06-02 17:45:16.021481 | orchestrator | Monday 02 June 2025 17:33:54 +0000 (0:00:00.174) 0:00:27.563 ***********
2025-06-02 17:45:16.021489 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021493 | orchestrator |
2025-06-02 17:45:16.021497 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-06-02 17:45:16.021501 | orchestrator | Monday 02 June 2025 17:33:54 +0000 (0:00:00.256) 0:00:27.820 ***********
2025-06-02 17:45:16.021504 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021508 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.021512 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.021516 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.021520 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.021524 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.021528 | orchestrator |
2025-06-02 17:45:16.021531 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-06-02 17:45:16.021539 | orchestrator | Monday 02 June 2025 17:33:56 +0000 (0:00:01.832) 0:00:29.652 ***********
2025-06-02 17:45:16.021543 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021568 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.021573 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.021577 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.021580 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.021584 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.021588 | orchestrator |
2025-06-02 17:45:16.021592 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-06-02 17:45:16.021596 | orchestrator | Monday 02 June 2025 17:33:58 +0000 (0:00:01.937) 0:00:31.590 ***********
2025-06-02 17:45:16.021600 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021604 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.021607 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.021611 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.021615 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.021619 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.021623 | orchestrator |
2025-06-02 17:45:16.021627 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-06-02 17:45:16.021631 | orchestrator | Monday 02 June 2025 17:33:59 +0000 (0:00:01.238) 0:00:32.828 ***********
2025-06-02 17:45:16.021634 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021638 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.021642 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.021646 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.021650 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.021654 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.021657 | orchestrator |
2025-06-02 17:45:16.021661 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-06-02 17:45:16.021665 | orchestrator | Monday 02 June 2025 17:34:01 +0000 (0:00:01.378) 0:00:34.207 ***********
2025-06-02 17:45:16.021669 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021673 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.021677 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.021716 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.021720 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.021724 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.021728 | orchestrator |
2025-06-02 17:45:16.021731 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-06-02 17:45:16.021735 | orchestrator | Monday 02 June 2025 17:34:02 +0000 (0:00:01.053) 0:00:35.261 ***********
2025-06-02 17:45:16.021756 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021760 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.021764 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.021768 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.021772 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.021776 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.021780 | orchestrator |
2025-06-02 17:45:16.021784 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-06-02 17:45:16.021791 | orchestrator | Monday 02 June 2025 17:34:03 +0000 (0:00:01.732) 0:00:36.994 ***********
2025-06-02 17:45:16.021795 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021799 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.021803 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.021807 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.021810 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.021814 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.021818 | orchestrator |
2025-06-02 17:45:16.021822 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-06-02 17:45:16.021826 | orchestrator | Monday 02 June 2025 17:34:04 +0000 (0:00:00.866) 0:00:37.860 ***********
2025-06-02 17:45:16.021831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021868 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3', 'scsi-SQEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part1', 'scsi-SQEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part14', 'scsi-SQEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part15', 'scsi-SQEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part16', 'scsi-SQEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 17:45:16.021908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 17:45:16.021915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021954 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021963 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f', 'scsi-SQEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part1', 'scsi-SQEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part14', 'scsi-SQEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part15', 'scsi-SQEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part16', 'scsi-SQEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 17:45:16.021972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 17:45:16.021976 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.021980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.021992 | orchestrator |
skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.021996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.022003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.022007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.022011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.022048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.022052 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.022060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d', 'scsi-SQEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part1', 'scsi-SQEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part14', 'scsi-SQEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part15', 'scsi-SQEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part16', 'scsi-SQEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:45:16.023089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:45:16.023176 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8450978f--95f9--56a8--b94f--b89f59985534-osd--block--8450978f--95f9--56a8--b94f--b89f59985534', 'dm-uuid-LVM-C1PeLgF1SxuUfh3ynRcRKoj564FyEqEhCHhSqiIiYbxftGB6XqSANuIyMw54bdoo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023218 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.023230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4af7f5ab--70f7--5f81--8195--4d6574833a1e-osd--block--4af7f5ab--70f7--5f81--8195--4d6574833a1e', 'dm-uuid-LVM-pFJq6nbtSqDHxlWYzG8pS3VeXlxNepxxO2BGKsksEHWXQF2TkE1j1GjykyBHupHO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--428bf6aa--16e8--529e--a7f6--02fc5b7007d7-osd--block--428bf6aa--16e8--529e--a7f6--02fc5b7007d7', 'dm-uuid-LVM-fHoNCxtRreMFFTWOPBe2ysAAlEBwyI3gFg84Qx1fAvx2XHSc65dIcB3OudZopEIx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--26d332e8--3a94--5f56--adf2--82846ed63b84-osd--block--26d332e8--3a94--5f56--adf2--82846ed63b84', 'dm-uuid-LVM-9xcVI4TBNfIyK6jFKjrZCWdl0mksa54asOizRAQetCkX2NpAhYr96uEe6IeSNSZ9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023288 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023313 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023322 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023353 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023360 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023378 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023386 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part14', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part15', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:45:16.023419 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--428bf6aa--16e8--529e--a7f6--02fc5b7007d7-osd--block--428bf6aa--16e8--529e--a7f6--02fc5b7007d7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-L0Uoew-tdG5-5o2e-uK3H-Tk0g-iUQ0-9OmC0S', 'scsi-0QEMU_QEMU_HARDDISK_7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3', 'scsi-SQEMU_QEMU_HARDDISK_7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:45:16.023427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--26d332e8--3a94--5f56--adf2--82846ed63b84-osd--block--26d332e8--3a94--5f56--adf2--82846ed63b84'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ePsnht-YeWJ-Lf9E-hAE9-dAcD-3nfo-eUnWxm', 'scsi-0QEMU_QEMU_HARDDISK_cab884bf-6138-4574-8f5c-e044606bea62', 'scsi-SQEMU_QEMU_HARDDISK_cab884bf-6138-4574-8f5c-e044606bea62'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:45:16.023445 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023486 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_075a40bb-072b-46c1-930e-3c0277237be4', 'scsi-SQEMU_QEMU_HARDDISK_075a40bb-072b-46c1-930e-3c0277237be4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:45:16.023503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part1', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part14', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part15', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 
'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part16', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:45:16.023517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:45:16.023594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8450978f--95f9--56a8--b94f--b89f59985534-osd--block--8450978f--95f9--56a8--b94f--b89f59985534'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dtLcbm-BvrF-poUw-P8wK-mlch-Xot4-XRgIij', 'scsi-0QEMU_QEMU_HARDDISK_f446ae25-d9a7-444f-b563-a9cba680652a', 'scsi-SQEMU_QEMU_HARDDISK_f446ae25-d9a7-444f-b563-a9cba680652a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:45:16.023609 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4af7f5ab--70f7--5f81--8195--4d6574833a1e-osd--block--4af7f5ab--70f7--5f81--8195--4d6574833a1e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UDcTuA-YoxY-RB14-ZrH1-jOQP-Bnc2-CbHAFd', 'scsi-0QEMU_QEMU_HARDDISK_dd4bab9d-0787-4709-bf4e-89aace2da140', 'scsi-SQEMU_QEMU_HARDDISK_dd4bab9d-0787-4709-bf4e-89aace2da140'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:45:16.023621 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.023632 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7f9d288-1a32-443d-a362-6ba679ef0f8f', 'scsi-SQEMU_QEMU_HARDDISK_c7f9d288-1a32-443d-a362-6ba679ef0f8f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:45:16.023645 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:45:16.023657 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.023673 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7944d10b--922c--5cd9--bd54--91ce5496d9bc-osd--block--7944d10b--922c--5cd9--bd54--91ce5496d9bc', 'dm-uuid-LVM-ytups1pI5RQScR8es6EC2ehzveRarGHlbFqc4V4MjMzJo3TlgtjjYi6IsQ2GV1XY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--455b12e9--4014--57cf--aec2--de5d805a7d14-osd--block--455b12e9--4014--57cf--aec2--de5d805a7d14', 'dm-uuid-LVM-41xUQUmZVztKsWiHhnpwo6xNJtTVNfNAFjLeRlfZIUjvsJzby2C0fsQozgJh83BM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023727 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:45:16.023753 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.023761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.023769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.023778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.023790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-06-02 17:45:16.023805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part1', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part14', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part15', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part16', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 17:45:16.023818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7944d10b--922c--5cd9--bd54--91ce5496d9bc-osd--block--7944d10b--922c--5cd9--bd54--91ce5496d9bc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CAozE0-JMkL-sS2s-sKDL-CQKZ-VNnx-KvTVaZ', 'scsi-0QEMU_QEMU_HARDDISK_4a588e14-c726-4684-ac8a-ec1bcbcaf53d', 'scsi-SQEMU_QEMU_HARDDISK_4a588e14-c726-4684-ac8a-ec1bcbcaf53d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 17:45:16.023825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--455b12e9--4014--57cf--aec2--de5d805a7d14-osd--block--455b12e9--4014--57cf--aec2--de5d805a7d14'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pucggk-7A71-e7n9-I93l-XDiI-evfo-q9vyJA', 'scsi-0QEMU_QEMU_HARDDISK_42dd6fc7-77c1-48dd-afcf-d774f79f6bbd', 'scsi-SQEMU_QEMU_HARDDISK_42dd6fc7-77c1-48dd-afcf-d774f79f6bbd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 17:45:16.023837 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53941cc3-a8ff-45b3-9c82-286f81867ab6', 'scsi-SQEMU_QEMU_HARDDISK_53941cc3-a8ff-45b3-9c82-286f81867ab6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 17:45:16.023852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-06-02 17:45:16.023863 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.023871 | orchestrator |
2025-06-02 17:45:16.023878 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2025-06-02 17:45:16.023885 | orchestrator | Monday 02 June 2025 17:34:07 +0000 (0:00:03.085) 0:00:40.946 ***********
2025-06-02 17:45:16.023941 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.023951 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.023958 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.023965 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.023976 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.023992 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024007 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024014 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024026 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3', 'scsi-SQEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part1', 'scsi-SQEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part14', 'scsi-SQEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part15', 'scsi-SQEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part16', 'scsi-SQEMU_QEMU_HARDDISK_c0f8b339-eb3b-4bc4-a7f0-e33af1d9cfa3-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024043 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-45-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024052 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024059 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024066 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024073 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024084 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024097 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024104 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.024118 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024125 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024136 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f', 'scsi-SQEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part1', 'scsi-SQEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part14', 'scsi-SQEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part15', 'scsi-SQEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part16', 'scsi-SQEMU_QEMU_HARDDISK_2efc9266-ddfc-4e29-8616-f47e0c5d606f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024150 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024163 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024170 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024177 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024184 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024191 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024207 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024215 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.024226 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024233 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024244 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d', 'scsi-SQEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part1', 'scsi-SQEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part14', 'scsi-SQEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part15', 'scsi-SQEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part16', 'scsi-SQEMU_QEMU_HARDDISK_8697a44b-eed5-41d0-9c8d-10255323f65d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024257 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-39-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024264 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.024277 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8450978f--95f9--56a8--b94f--b89f59985534-osd--block--8450978f--95f9--56a8--b94f--b89f59985534', 'dm-uuid-LVM-C1PeLgF1SxuUfh3ynRcRKoj564FyEqEhCHhSqiIiYbxftGB6XqSANuIyMw54bdoo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024285 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4af7f5ab--70f7--5f81--8195--4d6574833a1e-osd--block--4af7f5ab--70f7--5f81--8195--4d6574833a1e', 'dm-uuid-LVM-pFJq6nbtSqDHxlWYzG8pS3VeXlxNepxxO2BGKsksEHWXQF2TkE1j1GjykyBHupHO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024292 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024300 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-06-02 17:45:16.024322 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var':
'item'})  2025-06-02 17:45:16.024335 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.024343 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.024351 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.024358 | orchestrator | skipping: 
[testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.024372 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--428bf6aa--16e8--529e--a7f6--02fc5b7007d7-osd--block--428bf6aa--16e8--529e--a7f6--02fc5b7007d7', 'dm-uuid-LVM-fHoNCxtRreMFFTWOPBe2ysAAlEBwyI3gFg84Qx1fAvx2XHSc65dIcB3OudZopEIx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.024389 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part1', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part14', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part15', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part16', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 17:45:16.024398 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--26d332e8--3a94--5f56--adf2--82846ed63b84-osd--block--26d332e8--3a94--5f56--adf2--82846ed63b84', 'dm-uuid-LVM-9xcVI4TBNfIyK6jFKjrZCWdl0mksa54asOizRAQetCkX2NpAhYr96uEe6IeSNSZ9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.024411 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8450978f--95f9--56a8--b94f--b89f59985534-osd--block--8450978f--95f9--56a8--b94f--b89f59985534'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dtLcbm-BvrF-poUw-P8wK-mlch-Xot4-XRgIij', 'scsi-0QEMU_QEMU_HARDDISK_f446ae25-d9a7-444f-b563-a9cba680652a', 'scsi-SQEMU_QEMU_HARDDISK_f446ae25-d9a7-444f-b563-a9cba680652a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.024420 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4af7f5ab--70f7--5f81--8195--4d6574833a1e-osd--block--4af7f5ab--70f7--5f81--8195--4d6574833a1e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UDcTuA-YoxY-RB14-ZrH1-jOQP-Bnc2-CbHAFd', 'scsi-0QEMU_QEMU_HARDDISK_dd4bab9d-0787-4709-bf4e-89aace2da140', 'scsi-SQEMU_QEMU_HARDDISK_dd4bab9d-0787-4709-bf4e-89aace2da140'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.024883 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7f9d288-1a32-443d-a362-6ba679ef0f8f', 'scsi-SQEMU_QEMU_HARDDISK_c7f9d288-1a32-443d-a362-6ba679ef0f8f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.024989 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025008 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025031 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025042 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025050 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025056 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.025072 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025080 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025087 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025099 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025117 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part14', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part15', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part15'], 
'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025126 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7944d10b--922c--5cd9--bd54--91ce5496d9bc-osd--block--7944d10b--922c--5cd9--bd54--91ce5496d9bc', 'dm-uuid-LVM-ytups1pI5RQScR8es6EC2ehzveRarGHlbFqc4V4MjMzJo3TlgtjjYi6IsQ2GV1XY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025134 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': 
['ceph--428bf6aa--16e8--529e--a7f6--02fc5b7007d7-osd--block--428bf6aa--16e8--529e--a7f6--02fc5b7007d7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-L0Uoew-tdG5-5o2e-uK3H-Tk0g-iUQ0-9OmC0S', 'scsi-0QEMU_QEMU_HARDDISK_7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3', 'scsi-SQEMU_QEMU_HARDDISK_7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025146 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--455b12e9--4014--57cf--aec2--de5d805a7d14-osd--block--455b12e9--4014--57cf--aec2--de5d805a7d14', 'dm-uuid-LVM-41xUQUmZVztKsWiHhnpwo6xNJtTVNfNAFjLeRlfZIUjvsJzby2C0fsQozgJh83BM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025156 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025164 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025176 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025183 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025190 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025202 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025213 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--26d332e8--3a94--5f56--adf2--82846ed63b84-osd--block--26d332e8--3a94--5f56--adf2--82846ed63b84'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ePsnht-YeWJ-Lf9E-hAE9-dAcD-3nfo-eUnWxm', 'scsi-0QEMU_QEMU_HARDDISK_cab884bf-6138-4574-8f5c-e044606bea62', 'scsi-SQEMU_QEMU_HARDDISK_cab884bf-6138-4574-8f5c-e044606bea62'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025220 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025232 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_075a40bb-072b-46c1-930e-3c0277237be4', 'scsi-SQEMU_QEMU_HARDDISK_075a40bb-072b-46c1-930e-3c0277237be4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025239 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025251 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025258 | orchestrator | skipping: 
[testbed-node-4] 2025-06-02 17:45:16.025273 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part1', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part14', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part15', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part16', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 
MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025281 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7944d10b--922c--5cd9--bd54--91ce5496d9bc-osd--block--7944d10b--922c--5cd9--bd54--91ce5496d9bc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CAozE0-JMkL-sS2s-sKDL-CQKZ-VNnx-KvTVaZ', 'scsi-0QEMU_QEMU_HARDDISK_4a588e14-c726-4684-ac8a-ec1bcbcaf53d', 'scsi-SQEMU_QEMU_HARDDISK_4a588e14-c726-4684-ac8a-ec1bcbcaf53d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025295 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--455b12e9--4014--57cf--aec2--de5d805a7d14-osd--block--455b12e9--4014--57cf--aec2--de5d805a7d14'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pucggk-7A71-e7n9-I93l-XDiI-evfo-q9vyJA', 'scsi-0QEMU_QEMU_HARDDISK_42dd6fc7-77c1-48dd-afcf-d774f79f6bbd', 'scsi-SQEMU_QEMU_HARDDISK_42dd6fc7-77c1-48dd-afcf-d774f79f6bbd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025308 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53941cc3-a8ff-45b3-9c82-286f81867ab6', 'scsi-SQEMU_QEMU_HARDDISK_53941cc3-a8ff-45b3-9c82-286f81867ab6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025315 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:45:16.025322 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.025329 | orchestrator | 2025-06-02 17:45:16.025336 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-02 17:45:16.025343 | orchestrator | Monday 02 June 2025 17:34:11 +0000 (0:00:03.474) 0:00:44.421 *********** 2025-06-02 17:45:16.025350 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.025357 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.025363 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.025373 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.025380 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.025386 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.025393 | orchestrator | 2025-06-02 17:45:16.025400 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-02 17:45:16.025406 | orchestrator | Monday 02 June 2025 17:34:13 +0000 (0:00:02.096) 0:00:46.517 *********** 2025-06-02 17:45:16.025413 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.025420 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.025427 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.025433 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.025446 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.025452 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.025459 | orchestrator | 2025-06-02 17:45:16.025465 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-02 17:45:16.025472 | orchestrator | Monday 02 June 2025 17:34:15 +0000 (0:00:02.156) 0:00:48.674 *********** 2025-06-02 17:45:16.025482 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
17:45:16.025494 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.025504 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.025521 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.025534 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.025545 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.025555 | orchestrator | 2025-06-02 17:45:16.025566 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-02 17:45:16.025577 | orchestrator | Monday 02 June 2025 17:34:16 +0000 (0:00:01.144) 0:00:49.819 *********** 2025-06-02 17:45:16.025588 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.025599 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.025609 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.025619 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.025629 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.025639 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.025650 | orchestrator | 2025-06-02 17:45:16.025660 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-02 17:45:16.025670 | orchestrator | Monday 02 June 2025 17:34:17 +0000 (0:00:00.725) 0:00:50.545 *********** 2025-06-02 17:45:16.025681 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.025691 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.025702 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.025712 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.025723 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.025736 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.025747 | orchestrator | 2025-06-02 17:45:16.025758 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-02 17:45:16.025769 | orchestrator | Monday 02 June 
2025 17:34:19 +0000 (0:00:01.876) 0:00:52.421 *********** 2025-06-02 17:45:16.025779 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.025791 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.025801 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.025812 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.025823 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.025834 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.025845 | orchestrator | 2025-06-02 17:45:16.025857 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-02 17:45:16.025867 | orchestrator | Monday 02 June 2025 17:34:20 +0000 (0:00:00.811) 0:00:53.233 *********** 2025-06-02 17:45:16.025880 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 17:45:16.025887 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-06-02 17:45:16.025927 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-06-02 17:45:16.025934 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-02 17:45:16.025941 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-06-02 17:45:16.025948 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-02 17:45:16.025955 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-02 17:45:16.025961 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-06-02 17:45:16.025968 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-02 17:45:16.025975 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-06-02 17:45:16.025981 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-02 17:45:16.025988 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-02 17:45:16.025994 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-02 17:45:16.026076 | orchestrator | ok: 
[testbed-node-2] => (item=testbed-node-2) 2025-06-02 17:45:16.026101 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-02 17:45:16.026115 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-02 17:45:16.026127 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-02 17:45:16.026138 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-02 17:45:16.026150 | orchestrator | 2025-06-02 17:45:16.026162 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-02 17:45:16.026173 | orchestrator | Monday 02 June 2025 17:34:24 +0000 (0:00:04.081) 0:00:57.315 *********** 2025-06-02 17:45:16.026185 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 17:45:16.026198 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 17:45:16.026209 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 17:45:16.026221 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.026231 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-02 17:45:16.026243 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-02 17:45:16.026254 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-02 17:45:16.026265 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-02 17:45:16.026275 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-02 17:45:16.026286 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-02 17:45:16.026297 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.026308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-02 17:45:16.026329 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-02 17:45:16.026340 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.026350 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-02 17:45:16.026361 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-02 17:45:16.026372 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-02 17:45:16.026383 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-02 17:45:16.026394 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.026404 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.026416 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-02 17:45:16.026427 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-02 17:45:16.026439 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-02 17:45:16.026450 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.026460 | orchestrator | 2025-06-02 17:45:16.026472 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-02 17:45:16.026483 | orchestrator | Monday 02 June 2025 17:34:25 +0000 (0:00:01.205) 0:00:58.521 *********** 2025-06-02 17:45:16.026494 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.026505 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.026516 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.026528 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.026540 | orchestrator | 2025-06-02 17:45:16.026552 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-02 17:45:16.026565 | orchestrator | Monday 02 June 2025 17:34:27 +0000 (0:00:02.381) 0:01:00.902 *********** 2025-06-02 17:45:16.026575 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.026586 | orchestrator | skipping: 
[testbed-node-4] 2025-06-02 17:45:16.026598 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.026609 | orchestrator | 2025-06-02 17:45:16.026620 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-02 17:45:16.026631 | orchestrator | Monday 02 June 2025 17:34:28 +0000 (0:00:00.772) 0:01:01.675 *********** 2025-06-02 17:45:16.026654 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.026665 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.026676 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.026687 | orchestrator | 2025-06-02 17:45:16.026699 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-02 17:45:16.026710 | orchestrator | Monday 02 June 2025 17:34:29 +0000 (0:00:01.241) 0:01:02.916 *********** 2025-06-02 17:45:16.026720 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.026731 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.026743 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.026755 | orchestrator | 2025-06-02 17:45:16.026765 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-02 17:45:16.026776 | orchestrator | Monday 02 June 2025 17:34:31 +0000 (0:00:01.133) 0:01:04.050 *********** 2025-06-02 17:45:16.026788 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.026800 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.026812 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.026823 | orchestrator | 2025-06-02 17:45:16.026835 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-02 17:45:16.026846 | orchestrator | Monday 02 June 2025 17:34:32 +0000 (0:00:01.322) 0:01:05.373 *********** 2025-06-02 17:45:16.026856 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:45:16.026867 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:45:16.026880 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:45:16.026978 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.026994 | orchestrator | 2025-06-02 17:45:16.027004 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-02 17:45:16.027015 | orchestrator | Monday 02 June 2025 17:34:33 +0000 (0:00:00.751) 0:01:06.124 *********** 2025-06-02 17:45:16.027026 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:45:16.027036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:45:16.027047 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:45:16.027063 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.027075 | orchestrator | 2025-06-02 17:45:16.027087 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-02 17:45:16.027098 | orchestrator | Monday 02 June 2025 17:34:33 +0000 (0:00:00.647) 0:01:06.772 *********** 2025-06-02 17:45:16.027109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:45:16.027120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:45:16.027132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:45:16.027143 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.027154 | orchestrator | 2025-06-02 17:45:16.027164 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-02 17:45:16.027175 | orchestrator | Monday 02 June 2025 17:34:35 +0000 (0:00:01.321) 0:01:08.094 *********** 2025-06-02 17:45:16.027186 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.027196 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.027209 | orchestrator | ok: [testbed-node-5] 
2025-06-02 17:45:16.027220 | orchestrator | 2025-06-02 17:45:16.027232 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-02 17:45:16.027244 | orchestrator | Monday 02 June 2025 17:34:36 +0000 (0:00:01.064) 0:01:09.158 *********** 2025-06-02 17:45:16.027256 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-02 17:45:16.027266 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-02 17:45:16.027278 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-02 17:45:16.027286 | orchestrator | 2025-06-02 17:45:16.027293 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-02 17:45:16.027299 | orchestrator | Monday 02 June 2025 17:34:37 +0000 (0:00:01.427) 0:01:10.586 *********** 2025-06-02 17:45:16.027325 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 17:45:16.027341 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 17:45:16.027349 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 17:45:16.027355 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-06-02 17:45:16.027362 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-02 17:45:16.027369 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-02 17:45:16.027375 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-02 17:45:16.027382 | orchestrator | 2025-06-02 17:45:16.027388 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-02 17:45:16.027395 | orchestrator | Monday 02 June 2025 17:34:38 +0000 (0:00:01.196) 0:01:11.783 *********** 2025-06-02 17:45:16.027402 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-06-02 17:45:16.027409 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 17:45:16.027416 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 17:45:16.027423 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-06-02 17:45:16.027430 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-02 17:45:16.027437 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-02 17:45:16.027444 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-02 17:45:16.027450 | orchestrator | 2025-06-02 17:45:16.027457 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 17:45:16.027464 | orchestrator | Monday 02 June 2025 17:34:41 +0000 (0:00:02.600) 0:01:14.383 *********** 2025-06-02 17:45:16.027472 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.027480 | orchestrator | 2025-06-02 17:45:16.027487 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 17:45:16.027494 | orchestrator | Monday 02 June 2025 17:34:43 +0000 (0:00:01.715) 0:01:16.099 *********** 2025-06-02 17:45:16.027501 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.027507 | orchestrator | 2025-06-02 17:45:16.027514 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 17:45:16.027521 | orchestrator | Monday 02 June 2025 
17:34:44 +0000 (0:00:01.640) 0:01:17.739 *********** 2025-06-02 17:45:16.027528 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.027534 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.027541 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.027548 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.027555 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.027562 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.027568 | orchestrator | 2025-06-02 17:45:16.027575 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 17:45:16.027582 | orchestrator | Monday 02 June 2025 17:34:46 +0000 (0:00:01.520) 0:01:19.260 *********** 2025-06-02 17:45:16.027589 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.027595 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.027602 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.027609 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.027616 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.027623 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.027630 | orchestrator | 2025-06-02 17:45:16.027637 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 17:45:16.027649 | orchestrator | Monday 02 June 2025 17:34:48 +0000 (0:00:01.860) 0:01:21.120 *********** 2025-06-02 17:45:16.027661 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.027669 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.027676 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.027683 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.027690 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.027697 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.027704 | orchestrator | 2025-06-02 17:45:16.027711 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2025-06-02 17:45:16.027718 | orchestrator | Monday 02 June 2025 17:34:50 +0000 (0:00:02.115) 0:01:23.236 *********** 2025-06-02 17:45:16.027724 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.027731 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.027738 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.027745 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.027751 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.027758 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.027764 | orchestrator | 2025-06-02 17:45:16.027771 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 17:45:16.027778 | orchestrator | Monday 02 June 2025 17:34:51 +0000 (0:00:01.611) 0:01:24.848 *********** 2025-06-02 17:45:16.027784 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.027791 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.027799 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.027806 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.027813 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.027820 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.027827 | orchestrator | 2025-06-02 17:45:16.027833 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 17:45:16.027840 | orchestrator | Monday 02 June 2025 17:34:53 +0000 (0:00:01.477) 0:01:26.326 *********** 2025-06-02 17:45:16.027854 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.027861 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.027869 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.027876 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.027883 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.027890 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.027921 | 
orchestrator | 2025-06-02 17:45:16.027933 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 17:45:16.027945 | orchestrator | Monday 02 June 2025 17:34:54 +0000 (0:00:00.900) 0:01:27.227 *********** 2025-06-02 17:45:16.027956 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.027967 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.027979 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.027986 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.027993 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.028000 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.028006 | orchestrator | 2025-06-02 17:45:16.028013 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 17:45:16.028020 | orchestrator | Monday 02 June 2025 17:34:55 +0000 (0:00:01.160) 0:01:28.387 *********** 2025-06-02 17:45:16.028027 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.028034 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.028040 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.028047 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.028054 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.028060 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.028067 | orchestrator | 2025-06-02 17:45:16.028074 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 17:45:16.028080 | orchestrator | Monday 02 June 2025 17:34:56 +0000 (0:00:01.598) 0:01:29.985 *********** 2025-06-02 17:45:16.028087 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.028094 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.028107 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.028114 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.028120 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.028127 | 
orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.028134 | orchestrator | 2025-06-02 17:45:16.028141 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 17:45:16.028147 | orchestrator | Monday 02 June 2025 17:34:58 +0000 (0:00:01.376) 0:01:31.361 *********** 2025-06-02 17:45:16.028154 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.028161 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.028167 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.028174 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.028181 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.028187 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.028193 | orchestrator | 2025-06-02 17:45:16.028200 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 17:45:16.028207 | orchestrator | Monday 02 June 2025 17:34:58 +0000 (0:00:00.584) 0:01:31.946 *********** 2025-06-02 17:45:16.028214 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.028221 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.028227 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.028234 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.028241 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.028247 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.028254 | orchestrator | 2025-06-02 17:45:16.028261 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 17:45:16.028268 | orchestrator | Monday 02 June 2025 17:34:59 +0000 (0:00:00.931) 0:01:32.878 *********** 2025-06-02 17:45:16.028275 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.028281 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.028288 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.028295 | orchestrator | ok: 
[testbed-node-3]
2025-06-02 17:45:16.028304 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.028316 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.028327 | orchestrator |
2025-06-02 17:45:16.028339 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-06-02 17:45:16.028350 | orchestrator | Monday 02 June 2025  17:35:00 +0000 (0:00:00.732) 0:01:33.611 ***********
2025-06-02 17:45:16.028362 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.028373 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.028383 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.028394 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.028406 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.028464 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.028474 | orchestrator |
2025-06-02 17:45:16.028486 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-06-02 17:45:16.028505 | orchestrator | Monday 02 June 2025  17:35:01 +0000 (0:00:00.904) 0:01:34.515 ***********
2025-06-02 17:45:16.028517 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.028527 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.028538 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.028575 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.028587 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.028597 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.028608 | orchestrator |
2025-06-02 17:45:16.028618 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-06-02 17:45:16.028628 | orchestrator | Monday 02 June 2025  17:35:02 +0000 (0:00:00.615) 0:01:35.131 ***********
2025-06-02 17:45:16.028639 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.028649 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.028659 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.028669 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.028679 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.028690 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.028710 | orchestrator |
2025-06-02 17:45:16.028721 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-06-02 17:45:16.028731 | orchestrator | Monday 02 June 2025  17:35:02 +0000 (0:00:00.845) 0:01:35.976 ***********
2025-06-02 17:45:16.028741 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.028751 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.028761 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.028772 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.028782 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.028793 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.028804 | orchestrator |
2025-06-02 17:45:16.028815 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-06-02 17:45:16.028836 | orchestrator | Monday 02 June 2025  17:35:03 +0000 (0:00:00.576) 0:01:36.552 ***********
2025-06-02 17:45:16.028848 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.028857 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.028868 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.028880 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.028912 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.028924 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.028935 | orchestrator |
2025-06-02 17:45:16.028945 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-06-02 17:45:16.028955 | orchestrator | Monday 02 June 2025  17:35:04 +0000 (0:00:00.828) 0:01:37.381 ***********
2025-06-02 17:45:16.028966 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.028977 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.028987 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.028998 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.029008 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.029018 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.029029 | orchestrator |
2025-06-02 17:45:16.029039 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-06-02 17:45:16.029050 | orchestrator | Monday 02 June 2025  17:35:04 +0000 (0:00:00.618) 0:01:38.000 ***********
2025-06-02 17:45:16.029060 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.029071 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.029081 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.029091 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.029101 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.029111 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.029122 | orchestrator |
2025-06-02 17:45:16.029132 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-06-02 17:45:16.029142 | orchestrator | Monday 02 June 2025  17:35:06 +0000 (0:00:01.287) 0:01:39.287 ***********
2025-06-02 17:45:16.029153 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:45:16.029163 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:45:16.029173 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:45:16.029183 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:45:16.029193 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:45:16.029203 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:45:16.029213 | orchestrator |
2025-06-02 17:45:16.029224 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-06-02 17:45:16.029233 | orchestrator | Monday 02 June 2025  17:35:08 +0000 (0:00:01.863) 0:01:41.151 ***********
2025-06-02 17:45:16.029245 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:45:16.029255 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:45:16.029267 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:45:16.029278 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:45:16.029289 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:45:16.029301 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:45:16.029312 | orchestrator |
2025-06-02 17:45:16.029324 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-06-02 17:45:16.029332 | orchestrator | Monday 02 June 2025  17:35:10 +0000 (0:00:01.923) 0:01:43.075 ***********
2025-06-02 17:45:16.029339 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:45:16.029354 | orchestrator |
2025-06-02 17:45:16.029361 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-06-02 17:45:16.029368 | orchestrator | Monday 02 June 2025  17:35:11 +0000 (0:00:01.233) 0:01:44.308 ***********
2025-06-02 17:45:16.029374 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.029381 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.029388 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.029394 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.029400 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.029407 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.029413 | orchestrator |
2025-06-02 17:45:16.029420 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-06-02 17:45:16.029427 | orchestrator | Monday 02 June 2025  17:35:12 +0000 (0:00:00.820) 0:01:45.129 ***********
2025-06-02 17:45:16.029433 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.029440 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.029446 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.029453 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.029459 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.029466 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.029477 | orchestrator |
2025-06-02 17:45:16.029488 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-06-02 17:45:16.029513 | orchestrator | Monday 02 June 2025  17:35:12 +0000 (0:00:00.599) 0:01:45.729 ***********
2025-06-02 17:45:16.029524 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 17:45:16.029535 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 17:45:16.029545 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 17:45:16.029555 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 17:45:16.029565 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 17:45:16.029575 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 17:45:16.029586 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 17:45:16.029596 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-06-02 17:45:16.029607 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 17:45:16.029617 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 17:45:16.029626 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 17:45:16.029635 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-06-02 17:45:16.029645 | orchestrator |
2025-06-02 17:45:16.029664 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-06-02 17:45:16.029675 | orchestrator | Monday 02 June 2025  17:35:14 +0000 (0:00:01.553) 0:01:47.282 ***********
2025-06-02 17:45:16.029685 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:45:16.029696 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:45:16.029706 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:45:16.029718 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:45:16.029728 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:45:16.029738 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:45:16.029749 | orchestrator |
2025-06-02 17:45:16.029759 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-06-02 17:45:16.029769 | orchestrator | Monday 02 June 2025  17:35:15 +0000 (0:00:00.972) 0:01:48.255 ***********
2025-06-02 17:45:16.029780 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.029790 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.029808 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.029820 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.029831 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.029842 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.029853 | orchestrator |
2025-06-02 17:45:16.029862 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-06-02 17:45:16.029869 | orchestrator | Monday 02 June 2025  17:35:16 +0000 (0:00:00.809) 0:01:49.064 ***********
2025-06-02 17:45:16.029876 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.029882 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.029889 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.029979 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.029992 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.030002 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.030013 | orchestrator |
2025-06-02 17:45:16.030079 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-06-02 17:45:16.030091 | orchestrator | Monday 02 June 2025  17:35:16 +0000 (0:00:00.524) 0:01:49.589 ***********
2025-06-02 17:45:16.030103 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.030115 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.030127 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.030140 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.030149 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.030156 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.030164 | orchestrator |
2025-06-02 17:45:16.030175 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-06-02 17:45:16.030186 | orchestrator | Monday 02 June 2025  17:35:17 +0000 (0:00:00.722) 0:01:50.312 ***********
2025-06-02 17:45:16.030198 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:45:16.030210 | orchestrator |
2025-06-02 17:45:16.030220 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-06-02 17:45:16.030230 | orchestrator | Monday 02 June 2025  17:35:18 +0000 (0:00:01.155) 0:01:51.467 ***********
2025-06-02 17:45:16.030239 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.030250 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.030260 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.030270 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.030280 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.030291 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.030302 | orchestrator |
2025-06-02 17:45:16.030312 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-06-02 17:45:16.030323 | orchestrator | Monday 02 June 2025  17:36:15 +0000 (0:00:57.168) 0:02:48.635 ***********
2025-06-02 17:45:16.030335 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-02 17:45:16.030346 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-02 17:45:16.030356 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-02 17:45:16.030366 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.030377 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-02 17:45:16.030388 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-02 17:45:16.030398 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-02 17:45:16.030426 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.030439 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-02 17:45:16.030449 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-02 17:45:16.030461 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-02 17:45:16.030485 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.030496 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-02 17:45:16.030508 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-02 17:45:16.030519 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-02 17:45:16.030531 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.030540 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-02 17:45:16.030546 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-02 17:45:16.030553 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-02 17:45:16.030560 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.030566 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-06-02 17:45:16.030573 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-06-02 17:45:16.030580 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-06-02 17:45:16.030607 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.030614 | orchestrator |
2025-06-02 17:45:16.030621 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-06-02 17:45:16.030628 | orchestrator | Monday 02 June 2025  17:36:16 +0000 (0:00:01.010) 0:02:49.646 ***********
2025-06-02 17:45:16.030634 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.030644 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.030654 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.030666 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.030677 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.030688 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.030697 | orchestrator |
2025-06-02 17:45:16.030707 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-06-02 17:45:16.030718 | orchestrator | Monday 02 June 2025  17:36:17 +0000 (0:00:00.783) 0:02:50.429 ***********
2025-06-02 17:45:16.030729 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.030740 | orchestrator |
2025-06-02 17:45:16.030751 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-06-02 17:45:16.030763 | orchestrator | Monday 02 June 2025  17:36:17 +0000 (0:00:00.173) 0:02:50.603 ***********
2025-06-02 17:45:16.030774 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.030785 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.030796 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.030807 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.030819 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.030830 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.030842 | orchestrator |
2025-06-02 17:45:16.030854 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-06-02 17:45:16.030866 | orchestrator | Monday 02 June 2025  17:36:18 +0000 (0:00:01.048) 0:02:51.651 ***********
2025-06-02 17:45:16.030877 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.030888 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.030961 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.030973 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.030983 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.030993 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.031004 | orchestrator |
2025-06-02 17:45:16.031015 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-06-02 17:45:16.031025 | orchestrator | Monday 02 June 2025  17:36:19 +0000 (0:00:00.750) 0:02:52.402 ***********
2025-06-02 17:45:16.031035 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.031045 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.031055 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.031066 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.031075 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.031190 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.031204 | orchestrator |
2025-06-02 17:45:16.031215 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-06-02 17:45:16.031226 | orchestrator | Monday 02 June 2025  17:36:20 +0000 (0:00:00.939) 0:02:53.341 ***********
2025-06-02 17:45:16.031236 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.031247 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.031258 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.031268 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.031279 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.031289 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.031299 | orchestrator |
2025-06-02 17:45:16.031309 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-06-02 17:45:16.031320 | orchestrator | Monday 02 June 2025  17:36:22 +0000 (0:00:02.531) 0:02:55.873 ***********
2025-06-02 17:45:16.031330 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.031340 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.031350 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.031360 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.031371 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.031380 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.031391 | orchestrator |
2025-06-02 17:45:16.031401 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-06-02 17:45:16.031412 | orchestrator | Monday 02 June 2025  17:36:24 +0000 (0:00:01.166) 0:02:57.039 ***********
2025-06-02 17:45:16.031423 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:45:16.031436 | orchestrator |
2025-06-02 17:45:16.031446 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-06-02 17:45:16.031465 | orchestrator | Monday 02 June 2025  17:36:25 +0000 (0:00:01.637) 0:02:58.676 ***********
2025-06-02 17:45:16.031475 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.031486 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.031496 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.031506 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.031516 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.031527 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.031537 | orchestrator |
2025-06-02 17:45:16.031548 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-06-02 17:45:16.031558 | orchestrator | Monday 02 June 2025  17:36:26 +0000 (0:00:00.881) 0:02:59.558 ***********
2025-06-02 17:45:16.031568 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.031579 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.031589 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.031600 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.031611 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.031623 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.031633 | orchestrator |
2025-06-02 17:45:16.031643 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-06-02 17:45:16.031654 | orchestrator | Monday 02 June 2025  17:36:27 +0000 (0:00:01.153) 0:03:00.711 ***********
2025-06-02 17:45:16.031665 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.031675 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.031685 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.031695 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.031706 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.031716 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.031727 | orchestrator |
2025-06-02 17:45:16.031737 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-06-02 17:45:16.031759 | orchestrator | Monday 02 June 2025  17:36:28 +0000 (0:00:00.818) 0:03:01.530 ***********
2025-06-02 17:45:16.031770 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.031780 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.031799 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.031810 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.031820 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.031830 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.031841 | orchestrator |
2025-06-02 17:45:16.031852 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-06-02 17:45:16.031862 | orchestrator | Monday 02 June 2025  17:36:29 +0000 (0:00:00.790) 0:03:02.321 ***********
2025-06-02 17:45:16.031873 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.031885 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.031912 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.031924 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.031934 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.031945 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.031957 | orchestrator |
2025-06-02 17:45:16.031964 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-06-02 17:45:16.031971 | orchestrator | Monday 02 June 2025  17:36:29 +0000 (0:00:00.646) 0:03:02.967 ***********
2025-06-02 17:45:16.031978 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.031984 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.031994 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.032006 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.032017 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.032028 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.032040 | orchestrator |
2025-06-02 17:45:16.032051 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-06-02 17:45:16.032063 | orchestrator | Monday 02 June 2025  17:36:30 +0000 (0:00:00.824) 0:03:03.791 ***********
2025-06-02 17:45:16.032075 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.032087 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.032098 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.032109 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.032119 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.032126 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.032133 | orchestrator |
2025-06-02 17:45:16.032139 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-06-02 17:45:16.032146 | orchestrator | Monday 02 June 2025  17:36:31 +0000 (0:00:00.659) 0:03:04.451 ***********
2025-06-02 17:45:16.032152 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.032159 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.032165 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.032172 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:45:16.032178 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:45:16.032184 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:45:16.032191 | orchestrator |
2025-06-02 17:45:16.032197 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-06-02 17:45:16.032204 | orchestrator | Monday 02 June 2025  17:36:32 +0000 (0:00:00.837) 0:03:05.289 ***********
2025-06-02 17:45:16.032210 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:45:16.032217 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:45:16.032223 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:45:16.032230 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.032237 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.032243 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:45:16.032250 | orchestrator |
2025-06-02 17:45:16.032256 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-06-02 17:45:16.032263 | orchestrator | Monday 02 June 2025  17:36:33 +0000 (0:00:01.212) 0:03:06.501 ***********
2025-06-02 17:45:16.032270 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:45:16.032282 | orchestrator |
2025-06-02 17:45:16.032292 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-06-02 17:45:16.032317 | orchestrator | Monday 02 June 2025  17:36:34 +0000 (0:00:01.127) 0:03:07.629 ***********
2025-06-02 17:45:16.032335 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-06-02 17:45:16.032345 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-06-02 17:45:16.032354 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-06-02 17:45:16.032364 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-06-02 17:45:16.032380 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-06-02 17:45:16.032391 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-06-02 17:45:16.032401 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-06-02 17:45:16.032410 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-06-02 17:45:16.032419 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-06-02 17:45:16.032428 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-06-02 17:45:16.032438 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-06-02 17:45:16.032448 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-06-02 17:45:16.032457 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-06-02 17:45:16.032466 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-06-02 17:45:16.032477 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-06-02 17:45:16.032487 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-06-02 17:45:16.032498 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-06-02 17:45:16.032508 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-06-02 17:45:16.032519 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-06-02 17:45:16.032530 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-06-02 17:45:16.032541 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-06-02 17:45:16.032562 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-06-02 17:45:16.032570 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-06-02 17:45:16.032577 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-06-02 17:45:16.032583 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-06-02 17:45:16.032590 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-06-02 17:45:16.032596 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-06-02 17:45:16.032603 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-06-02 17:45:16.032610 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-06-02 17:45:16.032616 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-06-02 17:45:16.032623 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-06-02 17:45:16.032630 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-06-02 17:45:16.032636 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-06-02 17:45:16.032643 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-06-02 17:45:16.032649 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-06-02 17:45:16.032656 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-06-02 17:45:16.032662 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-06-02 17:45:16.032668 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-06-02 17:45:16.032675 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-06-02 17:45:16.032681 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-06-02 17:45:16.032688 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-06-02 17:45:16.032695 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-06-02 17:45:16.032701 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-06-02 17:45:16.032740 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-06-02 17:45:16.032751 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-06-02 17:45:16.032767 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-06-02 17:45:16.032780 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 17:45:16.032792 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 17:45:16.032802 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-06-02 17:45:16.032814 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-06-02 17:45:16.032826 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 17:45:16.032836 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 17:45:16.032849 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 17:45:16.032861 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 17:45:16.032873 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 17:45:16.032884 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 17:45:16.032915 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-06-02 17:45:16.032927 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 17:45:16.032938 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 17:45:16.032950 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 17:45:16.032961 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 17:45:16.032972 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 17:45:16.032983 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 17:45:16.032994 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-06-02 17:45:16.033013 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 17:45:16.033024 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 17:45:16.033036 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 17:45:16.033048 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 17:45:16.033060 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 17:45:16.033071 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-06-02 17:45:16.033083 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 17:45:16.033094 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 17:45:16.033105 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 17:45:16.033117 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 17:45:16.033130 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 17:45:16.033142 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-06-02 17:45:16.033155 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 17:45:16.033167 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 17:45:16.033177 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 17:45:16.033197 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 17:45:16.033207 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 17:45:16.033217 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-06-02 17:45:16.033237 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-06-02 17:45:16.033247 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-06-02 17:45:16.033258 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-06-02 17:45:16.033268 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 17:45:16.033279 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-06-02 17:45:16.033289 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-06-02 17:45:16.033300 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-06-02 17:45:16.033311 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-06-02 17:45:16.033322 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-06-02 17:45:16.033334 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-06-02 17:45:16.033345 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-06-02 17:45:16.033355 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-06-02 17:45:16.033365 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-06-02 17:45:16.033376 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-06-02 17:45:16.033386 | orchestrator |
2025-06-02 17:45:16.033397 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-06-02 17:45:16.033407 | orchestrator | Monday 02 June 2025  17:36:41 +0000 (0:00:06.707) 0:03:14.336 ***********
2025-06-02 17:45:16.033419 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.033430 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.033440 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.033452 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:45:16.033463 | orchestrator |
2025-06-02 17:45:16.033474 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-06-02 17:45:16.033485 | orchestrator | Monday 02 June 2025  17:36:42 +0000 (0:00:01.256) 0:03:15.592 ***********
2025-06-02 17:45:16.033495 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 17:45:16.033507 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 17:45:16.033518 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 17:45:16.033529 | orchestrator |
2025-06-02 17:45:16.033540 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-06-02 17:45:16.033550 | orchestrator | Monday 02 June 2025  17:36:43 +0000 (0:00:00.723) 0:03:16.316 ***********
2025-06-02 17:45:16.033561 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-06-02 17:45:16.033572 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-06-02 17:45:16.033583 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-06-02 17:45:16.033593 | orchestrator |
2025-06-02 17:45:16.033604 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-06-02 17:45:16.033615 | orchestrator | Monday 02 June 2025  17:36:45 +0000 (0:00:02.036) 0:03:18.353 ***********
2025-06-02 17:45:16.033626 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:45:16.033636 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:45:16.033647 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:45:16.033663 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:45:16.033674 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:45:16.033686 | orchestrator | ok: [testbed-node-5]
2025-06-02
17:45:16.033696 | orchestrator | 2025-06-02 17:45:16.033734 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-06-02 17:45:16.033746 | orchestrator | Monday 02 June 2025 17:36:46 +0000 (0:00:00.981) 0:03:19.335 *********** 2025-06-02 17:45:16.033757 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.033768 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.033779 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.033790 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.033800 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.033811 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.033822 | orchestrator | 2025-06-02 17:45:16.033832 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-06-02 17:45:16.033843 | orchestrator | Monday 02 June 2025 17:36:47 +0000 (0:00:01.082) 0:03:20.418 *********** 2025-06-02 17:45:16.033854 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.033864 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.033875 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.033885 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.033912 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.033923 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.033934 | orchestrator | 2025-06-02 17:45:16.033945 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-06-02 17:45:16.033955 | orchestrator | Monday 02 June 2025 17:36:48 +0000 (0:00:00.681) 0:03:21.099 *********** 2025-06-02 17:45:16.033965 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.033974 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.033993 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.034004 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.034148 | 
orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.034168 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.034179 | orchestrator | 2025-06-02 17:45:16.034190 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-06-02 17:45:16.034200 | orchestrator | Monday 02 June 2025 17:36:49 +0000 (0:00:00.933) 0:03:22.032 *********** 2025-06-02 17:45:16.034211 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.034222 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.034232 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.034243 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.034253 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.034264 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.034274 | orchestrator | 2025-06-02 17:45:16.034285 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-06-02 17:45:16.034297 | orchestrator | Monday 02 June 2025 17:36:49 +0000 (0:00:00.757) 0:03:22.790 *********** 2025-06-02 17:45:16.034307 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.034317 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.034328 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.034339 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.034350 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.034360 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.034371 | orchestrator | 2025-06-02 17:45:16.034382 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-06-02 17:45:16.034393 | orchestrator | Monday 02 June 2025 17:36:50 +0000 (0:00:01.132) 0:03:23.923 *********** 2025-06-02 17:45:16.034403 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.034413 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 17:45:16.034424 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.034434 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.034445 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.034455 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.034466 | orchestrator | 2025-06-02 17:45:16.034477 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-06-02 17:45:16.034487 | orchestrator | Monday 02 June 2025 17:36:51 +0000 (0:00:00.685) 0:03:24.608 *********** 2025-06-02 17:45:16.034507 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.034518 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.034529 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.034539 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.034550 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.034561 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.034572 | orchestrator | 2025-06-02 17:45:16.034582 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-06-02 17:45:16.034593 | orchestrator | Monday 02 June 2025 17:36:52 +0000 (0:00:00.859) 0:03:25.467 *********** 2025-06-02 17:45:16.034604 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.034615 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.034626 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.034636 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.034646 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.034657 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.034668 | orchestrator | 2025-06-02 17:45:16.034678 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-06-02 17:45:16.034689 | orchestrator | Monday 02 June 2025 17:36:56 +0000 
(0:00:04.008) 0:03:29.475 *********** 2025-06-02 17:45:16.034700 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.034710 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.034721 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.034731 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.034742 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.034753 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.034764 | orchestrator | 2025-06-02 17:45:16.034774 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-06-02 17:45:16.034785 | orchestrator | Monday 02 June 2025 17:36:57 +0000 (0:00:01.141) 0:03:30.617 *********** 2025-06-02 17:45:16.034796 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.034806 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.034817 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.034827 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.034838 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.034848 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.034859 | orchestrator | 2025-06-02 17:45:16.034876 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-06-02 17:45:16.034886 | orchestrator | Monday 02 June 2025 17:36:58 +0000 (0:00:00.788) 0:03:31.406 *********** 2025-06-02 17:45:16.034957 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.034968 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.034979 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.034990 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.035001 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.035012 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.035022 | orchestrator | 2025-06-02 17:45:16.035032 | orchestrator | TASK [ceph-config : Render rgw configs] 
**************************************** 2025-06-02 17:45:16.035043 | orchestrator | Monday 02 June 2025 17:36:59 +0000 (0:00:00.877) 0:03:32.283 *********** 2025-06-02 17:45:16.035054 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.035064 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.035074 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.035085 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 17:45:16.035097 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 17:45:16.035108 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 17:45:16.035120 | orchestrator | 2025-06-02 17:45:16.035131 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-06-02 17:45:16.035196 | orchestrator | Monday 02 June 2025 17:36:59 +0000 (0:00:00.668) 0:03:32.951 *********** 2025-06-02 17:45:16.035210 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.035221 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.035231 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.035245 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-06-02 17:45:16.035259 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, 
{'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-06-02 17:45:16.035271 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-06-02 17:45:16.035281 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-06-02 17:45:16.035293 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.035303 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.035313 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-06-02 17:45:16.035324 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-06-02 17:45:16.035336 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.035347 | orchestrator | 2025-06-02 17:45:16.035357 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-06-02 17:45:16.035368 | orchestrator | Monday 02 June 2025 17:37:00 +0000 (0:00:01.014) 0:03:33.966 *********** 2025-06-02 17:45:16.035379 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.035391 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.035402 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.035413 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.035423 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.035433 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.035443 | orchestrator | 2025-06-02 17:45:16.035454 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-06-02 17:45:16.035465 | orchestrator | Monday 02 June 2025 17:37:01 +0000 (0:00:00.642) 0:03:34.608 *********** 2025-06-02 17:45:16.035475 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.035486 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.035497 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.035507 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.035518 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.035528 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.035539 | orchestrator | 2025-06-02 17:45:16.035561 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-02 17:45:16.035581 | orchestrator | Monday 02 June 2025 17:37:02 +0000 (0:00:00.870) 0:03:35.479 *********** 2025-06-02 17:45:16.035591 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.035601 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.035612 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.035623 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.035633 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.035643 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.035653 | orchestrator | 2025-06-02 17:45:16.035664 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv4] **** 2025-06-02 17:45:16.035675 | orchestrator | Monday 02 June 2025 17:37:03 +0000 (0:00:00.690) 0:03:36.169 *********** 2025-06-02 17:45:16.035685 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.035696 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.035707 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.035717 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.035727 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.035737 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.035748 | orchestrator | 2025-06-02 17:45:16.035759 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-02 17:45:16.035769 | orchestrator | Monday 02 June 2025 17:37:04 +0000 (0:00:00.897) 0:03:37.067 *********** 2025-06-02 17:45:16.035781 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.035791 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.035802 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.035844 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.035857 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.035866 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.035877 | orchestrator | 2025-06-02 17:45:16.035888 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-02 17:45:16.035926 | orchestrator | Monday 02 June 2025 17:37:04 +0000 (0:00:00.760) 0:03:37.827 *********** 2025-06-02 17:45:16.035937 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.035947 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.035958 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.035968 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.035978 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.035989 | orchestrator | ok: 
[testbed-node-5] 2025-06-02 17:45:16.035999 | orchestrator | 2025-06-02 17:45:16.036010 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-02 17:45:16.036021 | orchestrator | Monday 02 June 2025 17:37:06 +0000 (0:00:01.196) 0:03:39.024 *********** 2025-06-02 17:45:16.036032 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-02 17:45:16.036042 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-02 17:45:16.036053 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-02 17:45:16.036063 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.036074 | orchestrator | 2025-06-02 17:45:16.036085 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-02 17:45:16.036095 | orchestrator | Monday 02 June 2025 17:37:06 +0000 (0:00:00.404) 0:03:39.429 *********** 2025-06-02 17:45:16.036105 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-02 17:45:16.036116 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-02 17:45:16.036127 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-02 17:45:16.036137 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.036148 | orchestrator | 2025-06-02 17:45:16.036159 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-02 17:45:16.036169 | orchestrator | Monday 02 June 2025 17:37:06 +0000 (0:00:00.520) 0:03:39.950 *********** 2025-06-02 17:45:16.036179 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-02 17:45:16.036190 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-02 17:45:16.036209 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-02 17:45:16.036219 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.036230 | 
orchestrator | 2025-06-02 17:45:16.036241 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-02 17:45:16.036251 | orchestrator | Monday 02 June 2025 17:37:07 +0000 (0:00:00.457) 0:03:40.408 *********** 2025-06-02 17:45:16.036262 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.036272 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.036284 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.036294 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.036304 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.036314 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.036325 | orchestrator | 2025-06-02 17:45:16.036336 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-02 17:45:16.036346 | orchestrator | Monday 02 June 2025 17:37:08 +0000 (0:00:00.923) 0:03:41.331 *********** 2025-06-02 17:45:16.036356 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-06-02 17:45:16.036367 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-06-02 17:45:16.036377 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.036386 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.036396 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-06-02 17:45:16.036405 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.036415 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-02 17:45:16.036425 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-02 17:45:16.036435 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-02 17:45:16.036446 | orchestrator | 2025-06-02 17:45:16.036456 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-06-02 17:45:16.036467 | orchestrator | Monday 02 June 2025 17:37:10 +0000 (0:00:02.341) 0:03:43.673 *********** 2025-06-02 17:45:16.036476 | orchestrator | changed: 
[testbed-node-0] 2025-06-02 17:45:16.036485 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.036493 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.036502 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.036511 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.036521 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.036531 | orchestrator | 2025-06-02 17:45:16.036570 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 17:45:16.036582 | orchestrator | Monday 02 June 2025 17:37:13 +0000 (0:00:03.280) 0:03:46.953 *********** 2025-06-02 17:45:16.036592 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.036600 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.036609 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.036618 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.036627 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.036636 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.036645 | orchestrator | 2025-06-02 17:45:16.036655 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-02 17:45:16.036664 | orchestrator | Monday 02 June 2025 17:37:14 +0000 (0:00:01.026) 0:03:47.979 *********** 2025-06-02 17:45:16.036674 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.036685 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.036696 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.036706 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.036716 | orchestrator | 2025-06-02 17:45:16.036725 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-02 17:45:16.036736 | orchestrator | Monday 02 June 2025 17:37:16 +0000 (0:00:01.409) 
0:03:49.389 *********** 2025-06-02 17:45:16.036746 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.036756 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.036766 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.036775 | orchestrator | 2025-06-02 17:45:16.036795 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-02 17:45:16.036863 | orchestrator | Monday 02 June 2025 17:37:16 +0000 (0:00:00.348) 0:03:49.737 *********** 2025-06-02 17:45:16.036874 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.036885 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.036920 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.036930 | orchestrator | 2025-06-02 17:45:16.036941 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-02 17:45:16.036950 | orchestrator | Monday 02 June 2025 17:37:18 +0000 (0:00:01.856) 0:03:51.594 *********** 2025-06-02 17:45:16.036961 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 17:45:16.036971 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 17:45:16.036981 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 17:45:16.036990 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.037000 | orchestrator | 2025-06-02 17:45:16.037010 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-02 17:45:16.037020 | orchestrator | Monday 02 June 2025 17:37:19 +0000 (0:00:00.665) 0:03:52.260 *********** 2025-06-02 17:45:16.037029 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.037038 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.037048 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.037057 | orchestrator | 2025-06-02 17:45:16.037067 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] 
********************************** 2025-06-02 17:45:16.037077 | orchestrator | Monday 02 June 2025 17:37:19 +0000 (0:00:00.367) 0:03:52.628 *********** 2025-06-02 17:45:16.037086 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.037096 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.037107 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.037117 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.037126 | orchestrator | 2025-06-02 17:45:16.037136 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-02 17:45:16.037147 | orchestrator | Monday 02 June 2025 17:37:20 +0000 (0:00:01.118) 0:03:53.747 *********** 2025-06-02 17:45:16.037157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:45:16.037166 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:45:16.037177 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:45:16.037187 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.037198 | orchestrator | 2025-06-02 17:45:16.037208 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-02 17:45:16.037217 | orchestrator | Monday 02 June 2025 17:37:21 +0000 (0:00:00.457) 0:03:54.204 *********** 2025-06-02 17:45:16.037228 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.037237 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.037246 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.037256 | orchestrator | 2025-06-02 17:45:16.037266 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-02 17:45:16.037276 | orchestrator | Monday 02 June 2025 17:37:21 +0000 (0:00:00.403) 0:03:54.608 *********** 2025-06-02 17:45:16.037286 | orchestrator | 
skipping: [testbed-node-3] 2025-06-02 17:45:16.037295 | orchestrator | 2025-06-02 17:45:16.037306 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-02 17:45:16.037315 | orchestrator | Monday 02 June 2025 17:37:21 +0000 (0:00:00.235) 0:03:54.843 *********** 2025-06-02 17:45:16.037325 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.037335 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.037345 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.037355 | orchestrator | 2025-06-02 17:45:16.037364 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-02 17:45:16.037374 | orchestrator | Monday 02 June 2025 17:37:22 +0000 (0:00:00.319) 0:03:55.163 *********** 2025-06-02 17:45:16.037393 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.037403 | orchestrator | 2025-06-02 17:45:16.037413 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-02 17:45:16.037423 | orchestrator | Monday 02 June 2025 17:37:22 +0000 (0:00:00.207) 0:03:55.370 *********** 2025-06-02 17:45:16.037433 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.037443 | orchestrator | 2025-06-02 17:45:16.037453 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-02 17:45:16.037463 | orchestrator | Monday 02 June 2025 17:37:22 +0000 (0:00:00.244) 0:03:55.615 *********** 2025-06-02 17:45:16.037480 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.037490 | orchestrator | 2025-06-02 17:45:16.037499 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-02 17:45:16.037510 | orchestrator | Monday 02 June 2025 17:37:22 +0000 (0:00:00.389) 0:03:56.004 *********** 2025-06-02 17:45:16.037519 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.037529 | orchestrator | 
2025-06-02 17:45:16.037539 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-02 17:45:16.037550 | orchestrator | Monday 02 June 2025 17:37:23 +0000 (0:00:00.239) 0:03:56.243 *********** 2025-06-02 17:45:16.037559 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.037569 | orchestrator | 2025-06-02 17:45:16.037579 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-02 17:45:16.037589 | orchestrator | Monday 02 June 2025 17:37:23 +0000 (0:00:00.224) 0:03:56.468 *********** 2025-06-02 17:45:16.037598 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:45:16.037609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:45:16.037618 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:45:16.037628 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.037638 | orchestrator | 2025-06-02 17:45:16.037648 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-02 17:45:16.037658 | orchestrator | Monday 02 June 2025 17:37:23 +0000 (0:00:00.399) 0:03:56.867 *********** 2025-06-02 17:45:16.037668 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.037678 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.037688 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.037697 | orchestrator | 2025-06-02 17:45:16.037741 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-02 17:45:16.037753 | orchestrator | Monday 02 June 2025 17:37:24 +0000 (0:00:00.343) 0:03:57.210 *********** 2025-06-02 17:45:16.037763 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.037772 | orchestrator | 2025-06-02 17:45:16.037782 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-02 
17:45:16.037792 | orchestrator | Monday 02 June 2025 17:37:24 +0000 (0:00:00.254) 0:03:57.465 *********** 2025-06-02 17:45:16.037802 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.037813 | orchestrator | 2025-06-02 17:45:16.037823 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-02 17:45:16.037832 | orchestrator | Monday 02 June 2025 17:37:24 +0000 (0:00:00.246) 0:03:57.711 *********** 2025-06-02 17:45:16.037843 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.037853 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.037862 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.037872 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.037882 | orchestrator | 2025-06-02 17:45:16.037946 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-02 17:45:16.037958 | orchestrator | Monday 02 June 2025 17:37:25 +0000 (0:00:01.148) 0:03:58.860 *********** 2025-06-02 17:45:16.037968 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.037977 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.037987 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.037996 | orchestrator | 2025-06-02 17:45:16.038044 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-02 17:45:16.038057 | orchestrator | Monday 02 June 2025 17:37:26 +0000 (0:00:00.346) 0:03:59.207 *********** 2025-06-02 17:45:16.038067 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.038076 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.038086 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.038096 | orchestrator | 2025-06-02 17:45:16.038106 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-02 
17:45:16.038116 | orchestrator | Monday 02 June 2025 17:37:27 +0000 (0:00:01.235) 0:04:00.442 *********** 2025-06-02 17:45:16.038126 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:45:16.038136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:45:16.038146 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:45:16.038156 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.038166 | orchestrator | 2025-06-02 17:45:16.038176 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-02 17:45:16.038186 | orchestrator | Monday 02 June 2025 17:37:28 +0000 (0:00:01.193) 0:04:01.635 *********** 2025-06-02 17:45:16.038196 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.038205 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.038216 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.038225 | orchestrator | 2025-06-02 17:45:16.038235 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-02 17:45:16.038246 | orchestrator | Monday 02 June 2025 17:37:29 +0000 (0:00:00.400) 0:04:02.035 *********** 2025-06-02 17:45:16.038255 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.038265 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.038275 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.038339 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.038352 | orchestrator | 2025-06-02 17:45:16.038362 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-02 17:45:16.038372 | orchestrator | Monday 02 June 2025 17:37:30 +0000 (0:00:01.137) 0:04:03.173 *********** 2025-06-02 17:45:16.038382 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.038392 | orchestrator | 
ok: [testbed-node-4] 2025-06-02 17:45:16.038402 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.038412 | orchestrator | 2025-06-02 17:45:16.038423 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-02 17:45:16.038433 | orchestrator | Monday 02 June 2025 17:37:30 +0000 (0:00:00.435) 0:04:03.608 *********** 2025-06-02 17:45:16.038442 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.038453 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.038463 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.038473 | orchestrator | 2025-06-02 17:45:16.038483 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-02 17:45:16.038500 | orchestrator | Monday 02 June 2025 17:37:31 +0000 (0:00:01.291) 0:04:04.900 *********** 2025-06-02 17:45:16.038509 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:45:16.038519 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:45:16.038529 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:45:16.038538 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.038546 | orchestrator | 2025-06-02 17:45:16.038555 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-02 17:45:16.038564 | orchestrator | Monday 02 June 2025 17:37:32 +0000 (0:00:00.871) 0:04:05.772 *********** 2025-06-02 17:45:16.038572 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.038606 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.038615 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.038624 | orchestrator | 2025-06-02 17:45:16.038633 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-06-02 17:45:16.038648 | orchestrator | Monday 02 June 2025 17:37:33 +0000 (0:00:00.388) 0:04:06.161 *********** 
2025-06-02 17:45:16.038656 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.038665 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.038674 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.038682 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.038691 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.038700 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.038709 | orchestrator | 2025-06-02 17:45:16.038717 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-02 17:45:16.038726 | orchestrator | Monday 02 June 2025 17:37:34 +0000 (0:00:00.896) 0:04:07.057 *********** 2025-06-02 17:45:16.038770 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.038780 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.038789 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.038796 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.038805 | orchestrator | 2025-06-02 17:45:16.038813 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-02 17:45:16.038821 | orchestrator | Monday 02 June 2025 17:37:35 +0000 (0:00:01.070) 0:04:08.128 *********** 2025-06-02 17:45:16.038829 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.038838 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.038846 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.038854 | orchestrator | 2025-06-02 17:45:16.038862 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-02 17:45:16.038871 | orchestrator | Monday 02 June 2025 17:37:35 +0000 (0:00:00.359) 0:04:08.488 *********** 2025-06-02 17:45:16.038879 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.038887 | orchestrator | changed: [testbed-node-1] 2025-06-02 
17:45:16.038910 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.038919 | orchestrator | 2025-06-02 17:45:16.038927 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-02 17:45:16.038936 | orchestrator | Monday 02 June 2025 17:37:36 +0000 (0:00:01.235) 0:04:09.723 *********** 2025-06-02 17:45:16.038945 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 17:45:16.038954 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 17:45:16.038962 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 17:45:16.038971 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.038981 | orchestrator | 2025-06-02 17:45:16.038990 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-02 17:45:16.038999 | orchestrator | Monday 02 June 2025 17:37:37 +0000 (0:00:00.939) 0:04:10.663 *********** 2025-06-02 17:45:16.039008 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.039017 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.039026 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.039035 | orchestrator | 2025-06-02 17:45:16.039044 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-06-02 17:45:16.039052 | orchestrator | 2025-06-02 17:45:16.039060 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 17:45:16.039069 | orchestrator | Monday 02 June 2025 17:37:38 +0000 (0:00:00.920) 0:04:11.583 *********** 2025-06-02 17:45:16.039078 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.039087 | orchestrator | 2025-06-02 17:45:16.039095 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 
17:45:16.039103 | orchestrator | Monday 02 June 2025 17:37:39 +0000 (0:00:00.552) 0:04:12.136 *********** 2025-06-02 17:45:16.039112 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.039120 | orchestrator | 2025-06-02 17:45:16.039129 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 17:45:16.039145 | orchestrator | Monday 02 June 2025 17:37:39 +0000 (0:00:00.815) 0:04:12.951 *********** 2025-06-02 17:45:16.039154 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.039163 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.039172 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.039180 | orchestrator | 2025-06-02 17:45:16.039189 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 17:45:16.039198 | orchestrator | Monday 02 June 2025 17:37:40 +0000 (0:00:00.742) 0:04:13.694 *********** 2025-06-02 17:45:16.039207 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.039216 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.039225 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.039234 | orchestrator | 2025-06-02 17:45:16.039244 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 17:45:16.039253 | orchestrator | Monday 02 June 2025 17:37:41 +0000 (0:00:00.335) 0:04:14.029 *********** 2025-06-02 17:45:16.039262 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.039271 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.039279 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.039288 | orchestrator | 2025-06-02 17:45:16.039297 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 17:45:16.039310 | orchestrator | Monday 02 June 2025 17:37:41 
+0000 (0:00:00.311) 0:04:14.341 *********** 2025-06-02 17:45:16.039319 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.039328 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.039337 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.039346 | orchestrator | 2025-06-02 17:45:16.039355 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 17:45:16.039363 | orchestrator | Monday 02 June 2025 17:37:41 +0000 (0:00:00.616) 0:04:14.957 *********** 2025-06-02 17:45:16.039373 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.039381 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.039390 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.039398 | orchestrator | 2025-06-02 17:45:16.039428 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 17:45:16.039439 | orchestrator | Monday 02 June 2025 17:37:42 +0000 (0:00:00.774) 0:04:15.731 *********** 2025-06-02 17:45:16.039448 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.039456 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.039464 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.039472 | orchestrator | 2025-06-02 17:45:16.039480 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 17:45:16.039489 | orchestrator | Monday 02 June 2025 17:37:43 +0000 (0:00:00.355) 0:04:16.087 *********** 2025-06-02 17:45:16.039497 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.039522 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.039531 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.039539 | orchestrator | 2025-06-02 17:45:16.039548 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 17:45:16.039586 | orchestrator | Monday 02 June 2025 17:37:43 +0000 (0:00:00.346) 
0:04:16.433 *********** 2025-06-02 17:45:16.039595 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.039604 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.039612 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.039620 | orchestrator | 2025-06-02 17:45:16.039628 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 17:45:16.039636 | orchestrator | Monday 02 June 2025 17:37:44 +0000 (0:00:01.177) 0:04:17.611 *********** 2025-06-02 17:45:16.039645 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.039653 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.039661 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.039670 | orchestrator | 2025-06-02 17:45:16.039678 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 17:45:16.039686 | orchestrator | Monday 02 June 2025 17:37:45 +0000 (0:00:00.987) 0:04:18.599 *********** 2025-06-02 17:45:16.039702 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.039710 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.039718 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.039727 | orchestrator | 2025-06-02 17:45:16.039735 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 17:45:16.039743 | orchestrator | Monday 02 June 2025 17:37:46 +0000 (0:00:00.469) 0:04:19.068 *********** 2025-06-02 17:45:16.039751 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.039760 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.039768 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.039776 | orchestrator | 2025-06-02 17:45:16.039784 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 17:45:16.039793 | orchestrator | Monday 02 June 2025 17:37:46 +0000 (0:00:00.470) 0:04:19.539 *********** 2025-06-02 17:45:16.039801 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.039809 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.039818 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.039826 | orchestrator | 2025-06-02 17:45:16.039835 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 17:45:16.039843 | orchestrator | Monday 02 June 2025 17:37:47 +0000 (0:00:00.708) 0:04:20.248 *********** 2025-06-02 17:45:16.039851 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.039859 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.039868 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.039876 | orchestrator | 2025-06-02 17:45:16.039885 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 17:45:16.039907 | orchestrator | Monday 02 June 2025 17:37:47 +0000 (0:00:00.374) 0:04:20.622 *********** 2025-06-02 17:45:16.039916 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.039925 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.039933 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.039942 | orchestrator | 2025-06-02 17:45:16.039951 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 17:45:16.039960 | orchestrator | Monday 02 June 2025 17:37:47 +0000 (0:00:00.324) 0:04:20.947 *********** 2025-06-02 17:45:16.039968 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.039978 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.039987 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.039996 | orchestrator | 2025-06-02 17:45:16.040005 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 17:45:16.040014 | orchestrator | Monday 02 June 2025 17:37:48 +0000 (0:00:00.324) 0:04:21.271 *********** 2025-06-02 17:45:16.040024 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.040032 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.040041 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.040049 | orchestrator | 2025-06-02 17:45:16.040058 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 17:45:16.040066 | orchestrator | Monday 02 June 2025 17:37:48 +0000 (0:00:00.633) 0:04:21.905 *********** 2025-06-02 17:45:16.040076 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.040085 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.040094 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.040102 | orchestrator | 2025-06-02 17:45:16.040112 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 17:45:16.040123 | orchestrator | Monday 02 June 2025 17:37:49 +0000 (0:00:00.435) 0:04:22.341 *********** 2025-06-02 17:45:16.040129 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.040134 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.040140 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.040145 | orchestrator | 2025-06-02 17:45:16.040150 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 17:45:16.040161 | orchestrator | Monday 02 June 2025 17:37:49 +0000 (0:00:00.420) 0:04:22.762 *********** 2025-06-02 17:45:16.040167 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.040179 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.040185 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.040190 | orchestrator | 2025-06-02 17:45:16.040195 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-06-02 17:45:16.040201 | orchestrator | Monday 02 June 2025 17:37:50 +0000 (0:00:00.941) 0:04:23.703 *********** 2025-06-02 17:45:16.040206 | orchestrator | ok: [testbed-node-0] 2025-06-02 
17:45:16.040211 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.040216 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.040222 | orchestrator | 2025-06-02 17:45:16.040227 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-06-02 17:45:16.040232 | orchestrator | Monday 02 June 2025 17:37:51 +0000 (0:00:00.434) 0:04:24.138 *********** 2025-06-02 17:45:16.040238 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.040243 | orchestrator | 2025-06-02 17:45:16.040249 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-06-02 17:45:16.040254 | orchestrator | Monday 02 June 2025 17:37:51 +0000 (0:00:00.745) 0:04:24.884 *********** 2025-06-02 17:45:16.040259 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.040264 | orchestrator | 2025-06-02 17:45:16.040270 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-06-02 17:45:16.040275 | orchestrator | Monday 02 June 2025 17:37:52 +0000 (0:00:00.147) 0:04:25.031 *********** 2025-06-02 17:45:16.040280 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-02 17:45:16.040285 | orchestrator | 2025-06-02 17:45:16.040318 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-06-02 17:45:16.040325 | orchestrator | Monday 02 June 2025 17:37:53 +0000 (0:00:01.754) 0:04:26.785 *********** 2025-06-02 17:45:16.040330 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.040335 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.040341 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.040346 | orchestrator | 2025-06-02 17:45:16.040352 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-06-02 17:45:16.040357 | orchestrator | Monday 02 June 2025 
17:37:54 +0000 (0:00:00.415) 0:04:27.201 *********** 2025-06-02 17:45:16.040362 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.040368 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.040373 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.040379 | orchestrator | 2025-06-02 17:45:16.040384 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-06-02 17:45:16.040389 | orchestrator | Monday 02 June 2025 17:37:54 +0000 (0:00:00.391) 0:04:27.593 *********** 2025-06-02 17:45:16.040395 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.040400 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.040406 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.040411 | orchestrator | 2025-06-02 17:45:16.040416 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-06-02 17:45:16.040421 | orchestrator | Monday 02 June 2025 17:37:55 +0000 (0:00:01.248) 0:04:28.841 *********** 2025-06-02 17:45:16.040427 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.040432 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.040437 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.040443 | orchestrator | 2025-06-02 17:45:16.040448 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-06-02 17:45:16.040453 | orchestrator | Monday 02 June 2025 17:37:56 +0000 (0:00:01.121) 0:04:29.963 *********** 2025-06-02 17:45:16.040459 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.040464 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.040469 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.040475 | orchestrator | 2025-06-02 17:45:16.040480 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-06-02 17:45:16.040485 | orchestrator | Monday 02 June 2025 17:37:57 +0000 (0:00:00.780) 
0:04:30.743 *********** 2025-06-02 17:45:16.040490 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.040502 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.040510 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.040519 | orchestrator | 2025-06-02 17:45:16.040527 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-06-02 17:45:16.040536 | orchestrator | Monday 02 June 2025 17:37:58 +0000 (0:00:00.673) 0:04:31.416 *********** 2025-06-02 17:45:16.040544 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.040553 | orchestrator | 2025-06-02 17:45:16.040561 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-06-02 17:45:16.040570 | orchestrator | Monday 02 June 2025 17:37:59 +0000 (0:00:01.324) 0:04:32.741 *********** 2025-06-02 17:45:16.040578 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.040587 | orchestrator | 2025-06-02 17:45:16.040595 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-06-02 17:45:16.040604 | orchestrator | Monday 02 June 2025 17:38:00 +0000 (0:00:00.745) 0:04:33.487 *********** 2025-06-02 17:45:16.040612 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 17:45:16.040620 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:45:16.040629 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:45:16.040636 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 17:45:16.040644 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-06-02 17:45:16.040652 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-02 17:45:16.040660 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 17:45:16.040668 | orchestrator | 
changed: [testbed-node-0 -> {{ item }}] 2025-06-02 17:45:16.040676 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-02 17:45:16.040685 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-06-02 17:45:16.040693 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-06-02 17:45:16.040701 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-06-02 17:45:16.040710 | orchestrator | 2025-06-02 17:45:16.040724 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-06-02 17:45:16.040733 | orchestrator | Monday 02 June 2025 17:38:04 +0000 (0:00:03.692) 0:04:37.179 *********** 2025-06-02 17:45:16.040742 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.040750 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.040759 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.040767 | orchestrator | 2025-06-02 17:45:16.040774 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-06-02 17:45:16.040782 | orchestrator | Monday 02 June 2025 17:38:05 +0000 (0:00:01.657) 0:04:38.837 *********** 2025-06-02 17:45:16.040791 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.040799 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.040807 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.040815 | orchestrator | 2025-06-02 17:45:16.040823 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-06-02 17:45:16.040831 | orchestrator | Monday 02 June 2025 17:38:06 +0000 (0:00:00.355) 0:04:39.192 *********** 2025-06-02 17:45:16.040840 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.040848 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.040856 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.040865 | orchestrator | 2025-06-02 17:45:16.040874 | orchestrator | TASK [ceph-mon : Generate initial monmap] 
************************************** 2025-06-02 17:45:16.040882 | orchestrator | Monday 02 June 2025 17:38:06 +0000 (0:00:00.281) 0:04:39.475 *********** 2025-06-02 17:45:16.040890 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.040917 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.040925 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.040934 | orchestrator | 2025-06-02 17:45:16.040941 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-06-02 17:45:16.040976 | orchestrator | Monday 02 June 2025 17:38:08 +0000 (0:00:01.875) 0:04:41.350 *********** 2025-06-02 17:45:16.040990 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.040998 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.041006 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.041014 | orchestrator | 2025-06-02 17:45:16.041022 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-06-02 17:45:16.041030 | orchestrator | Monday 02 June 2025 17:38:09 +0000 (0:00:01.585) 0:04:42.935 *********** 2025-06-02 17:45:16.041038 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.041046 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.041053 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.041062 | orchestrator | 2025-06-02 17:45:16.041069 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-06-02 17:45:16.041077 | orchestrator | Monday 02 June 2025 17:38:10 +0000 (0:00:00.316) 0:04:43.251 *********** 2025-06-02 17:45:16.041085 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.041093 | orchestrator | 2025-06-02 17:45:16.041101 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-06-02 17:45:16.041108 | 
orchestrator | Monday 02 June 2025 17:38:10 +0000 (0:00:00.606) 0:04:43.858 *********** 2025-06-02 17:45:16.041117 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.041124 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.041132 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.041140 | orchestrator | 2025-06-02 17:45:16.041148 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-06-02 17:45:16.041156 | orchestrator | Monday 02 June 2025 17:38:11 +0000 (0:00:00.469) 0:04:44.327 *********** 2025-06-02 17:45:16.041164 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.041171 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.041180 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.041187 | orchestrator | 2025-06-02 17:45:16.041195 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-06-02 17:45:16.041203 | orchestrator | Monday 02 June 2025 17:38:11 +0000 (0:00:00.279) 0:04:44.607 *********** 2025-06-02 17:45:16.041211 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.041219 | orchestrator | 2025-06-02 17:45:16.041226 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-06-02 17:45:16.041234 | orchestrator | Monday 02 June 2025 17:38:12 +0000 (0:00:00.482) 0:04:45.090 *********** 2025-06-02 17:45:16.041242 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.041250 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.041258 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.041266 | orchestrator | 2025-06-02 17:45:16.041274 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-06-02 17:45:16.041282 | orchestrator | Monday 02 June 2025 17:38:14 +0000 (0:00:02.087) 
0:04:47.177 *********** 2025-06-02 17:45:16.041289 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.041297 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.041305 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.041313 | orchestrator | 2025-06-02 17:45:16.041320 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-02 17:45:16.041328 | orchestrator | Monday 02 June 2025 17:38:15 +0000 (0:00:01.471) 0:04:48.649 *********** 2025-06-02 17:45:16.041336 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.041343 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.041352 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.041359 | orchestrator | 2025-06-02 17:45:16.041367 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-02 17:45:16.041375 | orchestrator | Monday 02 June 2025 17:38:17 +0000 (0:00:01.962) 0:04:50.611 *********** 2025-06-02 17:45:16.041383 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.041390 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.041404 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.041411 | orchestrator | 2025-06-02 17:45:16.041419 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-06-02 17:45:16.041427 | orchestrator | Monday 02 June 2025 17:38:19 +0000 (0:00:01.968) 0:04:52.580 *********** 2025-06-02 17:45:16.041435 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.041443 | orchestrator | 2025-06-02 17:45:16.041455 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2025-06-02 17:45:16.041463 | orchestrator | Monday 02 June 2025 17:38:20 +0000 (0:00:01.041) 0:04:53.621 *********** 2025-06-02 17:45:16.041471 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-06-02 17:45:16.041479 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.041487 | orchestrator | 2025-06-02 17:45:16.041494 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-06-02 17:45:16.041502 | orchestrator | Monday 02 June 2025 17:38:42 +0000 (0:00:21.963) 0:05:15.585 *********** 2025-06-02 17:45:16.041510 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.041518 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.041526 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.041533 | orchestrator | 2025-06-02 17:45:16.041541 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-06-02 17:45:16.041549 | orchestrator | Monday 02 June 2025 17:38:52 +0000 (0:00:10.388) 0:05:25.973 *********** 2025-06-02 17:45:16.041557 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.041564 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.041572 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.041580 | orchestrator | 2025-06-02 17:45:16.041588 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-06-02 17:45:16.041596 | orchestrator | Monday 02 June 2025 17:38:53 +0000 (0:00:00.640) 0:05:26.614 *********** 2025-06-02 17:45:16.041632 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__af22be7ebfbf0dad9d461fa5c63aaee3ed414983'}}, {'key': 'public_network', 
'value': '192.168.16.0/20'}]) 2025-06-02 17:45:16.041642 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__af22be7ebfbf0dad9d461fa5c63aaee3ed414983'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-06-02 17:45:16.041651 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__af22be7ebfbf0dad9d461fa5c63aaee3ed414983'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-02 17:45:16.041660 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__af22be7ebfbf0dad9d461fa5c63aaee3ed414983'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-02 17:45:16.041668 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__af22be7ebfbf0dad9d461fa5c63aaee3ed414983'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-02 17:45:16.041681 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__af22be7ebfbf0dad9d461fa5c63aaee3ed414983'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__af22be7ebfbf0dad9d461fa5c63aaee3ed414983'}])  2025-06-02 17:45:16.041691 | orchestrator | 2025-06-02 17:45:16.041698 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 17:45:16.041706 | orchestrator | Monday 02 June 2025 17:39:07 +0000 (0:00:13.872) 0:05:40.486 *********** 2025-06-02 17:45:16.041713 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.041720 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.041728 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.041735 | orchestrator | 2025-06-02 17:45:16.041743 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-02 17:45:16.041750 | orchestrator | Monday 02 June 2025 17:39:07 +0000 (0:00:00.344) 0:05:40.831 *********** 2025-06-02 17:45:16.041758 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.041765 | orchestrator | 2025-06-02 17:45:16.041772 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-02 17:45:16.041780 | orchestrator | Monday 02 June 2025 17:39:08 +0000 (0:00:00.795) 0:05:41.627 *********** 2025-06-02 17:45:16.041787 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.041795 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.041806 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.041814 | orchestrator | 2025-06-02 17:45:16.041822 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-02 17:45:16.041830 | orchestrator | Monday 02 June 2025 17:39:08 +0000 (0:00:00.347) 0:05:41.974 *********** 2025-06-02 17:45:16.041838 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.041846 | 
orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.041854 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.041862 | orchestrator | 2025-06-02 17:45:16.041867 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-02 17:45:16.041872 | orchestrator | Monday 02 June 2025 17:39:09 +0000 (0:00:00.333) 0:05:42.308 *********** 2025-06-02 17:45:16.041877 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-02 17:45:16.041881 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-02 17:45:16.041886 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-02 17:45:16.041933 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.041941 | orchestrator | 2025-06-02 17:45:16.041946 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-02 17:45:16.041950 | orchestrator | Monday 02 June 2025 17:39:10 +0000 (0:00:00.902) 0:05:43.210 *********** 2025-06-02 17:45:16.041955 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.041960 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.041964 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.041969 | orchestrator | 2025-06-02 17:45:16.041974 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-06-02 17:45:16.041979 | orchestrator | 2025-06-02 17:45:16.041984 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 17:45:16.042039 | orchestrator | Monday 02 June 2025 17:39:11 +0000 (0:00:00.816) 0:05:44.027 *********** 2025-06-02 17:45:16.042051 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.042059 | orchestrator | 2025-06-02 17:45:16.042064 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-02 17:45:16.042068 | orchestrator | Monday 02 June 2025 17:39:11 +0000 (0:00:00.534) 0:05:44.562 *********** 2025-06-02 17:45:16.042079 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.042084 | orchestrator | 2025-06-02 17:45:16.042089 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 17:45:16.042094 | orchestrator | Monday 02 June 2025 17:39:12 +0000 (0:00:00.787) 0:05:45.349 *********** 2025-06-02 17:45:16.042100 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.042108 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.042116 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.042124 | orchestrator | 2025-06-02 17:45:16.042131 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 17:45:16.042139 | orchestrator | Monday 02 June 2025 17:39:13 +0000 (0:00:00.725) 0:05:46.075 *********** 2025-06-02 17:45:16.042147 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.042154 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.042162 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.042169 | orchestrator | 2025-06-02 17:45:16.042176 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 17:45:16.042184 | orchestrator | Monday 02 June 2025 17:39:13 +0000 (0:00:00.393) 0:05:46.469 *********** 2025-06-02 17:45:16.042191 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.042199 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.042206 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.042213 | orchestrator | 2025-06-02 17:45:16.042220 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 
17:45:16.042228 | orchestrator | Monday 02 June 2025 17:39:14 +0000 (0:00:00.819) 0:05:47.288 *********** 2025-06-02 17:45:16.042235 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.042243 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.042250 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.042258 | orchestrator | 2025-06-02 17:45:16.042265 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 17:45:16.042273 | orchestrator | Monday 02 June 2025 17:39:14 +0000 (0:00:00.506) 0:05:47.795 *********** 2025-06-02 17:45:16.042280 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.042287 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.042294 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.042302 | orchestrator | 2025-06-02 17:45:16.042309 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 17:45:16.042317 | orchestrator | Monday 02 June 2025 17:39:15 +0000 (0:00:00.960) 0:05:48.756 *********** 2025-06-02 17:45:16.042324 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.042331 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.042339 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.042346 | orchestrator | 2025-06-02 17:45:16.042353 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 17:45:16.042361 | orchestrator | Monday 02 June 2025 17:39:16 +0000 (0:00:00.305) 0:05:49.062 *********** 2025-06-02 17:45:16.042369 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.042377 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.042384 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.042391 | orchestrator | 2025-06-02 17:45:16.042399 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 17:45:16.042406 | 
orchestrator | Monday 02 June 2025 17:39:16 +0000 (0:00:00.673) 0:05:49.736 *********** 2025-06-02 17:45:16.042413 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.042422 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.042429 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.042436 | orchestrator | 2025-06-02 17:45:16.042443 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 17:45:16.042451 | orchestrator | Monday 02 June 2025 17:39:17 +0000 (0:00:00.723) 0:05:50.460 *********** 2025-06-02 17:45:16.042458 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.042466 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.042479 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.042487 | orchestrator | 2025-06-02 17:45:16.042500 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 17:45:16.042508 | orchestrator | Monday 02 June 2025 17:39:18 +0000 (0:00:00.733) 0:05:51.193 *********** 2025-06-02 17:45:16.042515 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.042523 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.042531 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.042538 | orchestrator | 2025-06-02 17:45:16.042546 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 17:45:16.042553 | orchestrator | Monday 02 June 2025 17:39:18 +0000 (0:00:00.289) 0:05:51.482 *********** 2025-06-02 17:45:16.042561 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.042568 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.042577 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.042584 | orchestrator | 2025-06-02 17:45:16.042592 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 17:45:16.042599 | orchestrator | Monday 02 June 2025 17:39:19 +0000 
(0:00:00.601) 0:05:52.083 *********** 2025-06-02 17:45:16.042607 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.042614 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.042622 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.042630 | orchestrator | 2025-06-02 17:45:16.042638 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 17:45:16.042645 | orchestrator | Monday 02 June 2025 17:39:19 +0000 (0:00:00.322) 0:05:52.406 *********** 2025-06-02 17:45:16.042653 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.042660 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.042668 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.042675 | orchestrator | 2025-06-02 17:45:16.042683 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 17:45:16.042725 | orchestrator | Monday 02 June 2025 17:39:19 +0000 (0:00:00.319) 0:05:52.725 *********** 2025-06-02 17:45:16.042734 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.042742 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.042750 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.042757 | orchestrator | 2025-06-02 17:45:16.042766 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 17:45:16.042773 | orchestrator | Monday 02 June 2025 17:39:20 +0000 (0:00:00.313) 0:05:53.039 *********** 2025-06-02 17:45:16.042781 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.042789 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.042796 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.042804 | orchestrator | 2025-06-02 17:45:16.042812 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 17:45:16.042821 | orchestrator | Monday 02 June 2025 17:39:20 +0000 
(0:00:00.601) 0:05:53.640 *********** 2025-06-02 17:45:16.042829 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.042836 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.042844 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.042852 | orchestrator | 2025-06-02 17:45:16.042859 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 17:45:16.042866 | orchestrator | Monday 02 June 2025 17:39:20 +0000 (0:00:00.309) 0:05:53.950 *********** 2025-06-02 17:45:16.042874 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.042882 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.042890 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.042915 | orchestrator | 2025-06-02 17:45:16.042922 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 17:45:16.042930 | orchestrator | Monday 02 June 2025 17:39:21 +0000 (0:00:00.337) 0:05:54.288 *********** 2025-06-02 17:45:16.042937 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.042945 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.042952 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.042960 | orchestrator | 2025-06-02 17:45:16.042975 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 17:45:16.042982 | orchestrator | Monday 02 June 2025 17:39:21 +0000 (0:00:00.346) 0:05:54.634 *********** 2025-06-02 17:45:16.042989 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.042997 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.043004 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.043011 | orchestrator | 2025-06-02 17:45:16.043019 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-06-02 17:45:16.043026 | orchestrator | Monday 02 June 2025 17:39:22 +0000 (0:00:00.816) 0:05:55.451 *********** 2025-06-02 
17:45:16.043033 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 17:45:16.043041 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 17:45:16.043049 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 17:45:16.043056 | orchestrator | 2025-06-02 17:45:16.043064 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-06-02 17:45:16.043071 | orchestrator | Monday 02 June 2025 17:39:23 +0000 (0:00:00.661) 0:05:56.112 *********** 2025-06-02 17:45:16.043078 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.043086 | orchestrator | 2025-06-02 17:45:16.043093 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-06-02 17:45:16.043101 | orchestrator | Monday 02 June 2025 17:39:23 +0000 (0:00:00.591) 0:05:56.703 *********** 2025-06-02 17:45:16.043108 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.043116 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.043124 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.043131 | orchestrator | 2025-06-02 17:45:16.043138 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-06-02 17:45:16.043146 | orchestrator | Monday 02 June 2025 17:39:24 +0000 (0:00:00.983) 0:05:57.687 *********** 2025-06-02 17:45:16.043153 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.043160 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.043168 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.043175 | orchestrator | 2025-06-02 17:45:16.043182 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-06-02 17:45:16.043190 | orchestrator | Monday 02 June 2025 17:39:25 +0000 
(0:00:00.373) 0:05:58.060 *********** 2025-06-02 17:45:16.043198 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 17:45:16.043211 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 17:45:16.043218 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 17:45:16.043225 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-06-02 17:45:16.043233 | orchestrator | 2025-06-02 17:45:16.043240 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-06-02 17:45:16.043248 | orchestrator | Monday 02 June 2025 17:39:35 +0000 (0:00:10.530) 0:06:08.591 *********** 2025-06-02 17:45:16.043255 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.043263 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.043270 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.043277 | orchestrator | 2025-06-02 17:45:16.043285 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-06-02 17:45:16.043292 | orchestrator | Monday 02 June 2025 17:39:36 +0000 (0:00:00.465) 0:06:09.056 *********** 2025-06-02 17:45:16.043300 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-02 17:45:16.043307 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 17:45:16.043314 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 17:45:16.043322 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-02 17:45:16.043329 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:45:16.043336 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:45:16.043344 | orchestrator | 2025-06-02 17:45:16.043356 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-06-02 17:45:16.043363 | orchestrator | Monday 02 June 2025 17:39:39 +0000 (0:00:03.143) 
0:06:12.200 *********** 2025-06-02 17:45:16.043402 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-02 17:45:16.043409 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 17:45:16.043417 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 17:45:16.043425 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 17:45:16.043433 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-02 17:45:16.043441 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-02 17:45:16.043449 | orchestrator | 2025-06-02 17:45:16.043456 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-06-02 17:45:16.043465 | orchestrator | Monday 02 June 2025 17:39:40 +0000 (0:00:01.276) 0:06:13.476 *********** 2025-06-02 17:45:16.043470 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.043474 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.043479 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.043484 | orchestrator | 2025-06-02 17:45:16.043489 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-06-02 17:45:16.043493 | orchestrator | Monday 02 June 2025 17:39:41 +0000 (0:00:00.777) 0:06:14.254 *********** 2025-06-02 17:45:16.043498 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.043503 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.043507 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.043512 | orchestrator | 2025-06-02 17:45:16.043517 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-06-02 17:45:16.043521 | orchestrator | Monday 02 June 2025 17:39:41 +0000 (0:00:00.405) 0:06:14.660 *********** 2025-06-02 17:45:16.043526 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.043531 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.043535 | orchestrator | 
skipping: [testbed-node-2] 2025-06-02 17:45:16.043540 | orchestrator | 2025-06-02 17:45:16.043545 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-06-02 17:45:16.043550 | orchestrator | Monday 02 June 2025 17:39:42 +0000 (0:00:00.660) 0:06:15.320 *********** 2025-06-02 17:45:16.043554 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.043559 | orchestrator | 2025-06-02 17:45:16.043564 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-06-02 17:45:16.043568 | orchestrator | Monday 02 June 2025 17:39:42 +0000 (0:00:00.543) 0:06:15.864 *********** 2025-06-02 17:45:16.043573 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.043578 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.043582 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.043587 | orchestrator | 2025-06-02 17:45:16.043592 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-06-02 17:45:16.043596 | orchestrator | Monday 02 June 2025 17:39:43 +0000 (0:00:00.430) 0:06:16.295 *********** 2025-06-02 17:45:16.043601 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.043606 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.043610 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.043615 | orchestrator | 2025-06-02 17:45:16.043620 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-06-02 17:45:16.043624 | orchestrator | Monday 02 June 2025 17:39:43 +0000 (0:00:00.338) 0:06:16.634 *********** 2025-06-02 17:45:16.043629 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:45:16.043634 | orchestrator | 2025-06-02 17:45:16.043638 | orchestrator | TASK [ceph-mgr : 
Generate systemd unit file] *********************************** 2025-06-02 17:45:16.043643 | orchestrator | Monday 02 June 2025 17:39:44 +0000 (0:00:00.836) 0:06:17.471 *********** 2025-06-02 17:45:16.043648 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.043652 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.043661 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.043666 | orchestrator | 2025-06-02 17:45:16.043671 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-06-02 17:45:16.043675 | orchestrator | Monday 02 June 2025 17:39:45 +0000 (0:00:01.217) 0:06:18.689 *********** 2025-06-02 17:45:16.043680 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.043685 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.043689 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.043694 | orchestrator | 2025-06-02 17:45:16.043699 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-06-02 17:45:16.043703 | orchestrator | Monday 02 June 2025 17:39:46 +0000 (0:00:01.017) 0:06:19.707 *********** 2025-06-02 17:45:16.043712 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.043716 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.043721 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.043726 | orchestrator | 2025-06-02 17:45:16.043731 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-06-02 17:45:16.043735 | orchestrator | Monday 02 June 2025 17:39:48 +0000 (0:00:02.032) 0:06:21.739 *********** 2025-06-02 17:45:16.043740 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.043745 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.043749 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.043754 | orchestrator | 2025-06-02 17:45:16.043759 | orchestrator | TASK [ceph-mgr : Include 
mgr_modules.yml] ************************************** 2025-06-02 17:45:16.043764 | orchestrator | Monday 02 June 2025 17:39:50 +0000 (0:00:01.990) 0:06:23.730 *********** 2025-06-02 17:45:16.043768 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.043773 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.043778 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-06-02 17:45:16.043782 | orchestrator | 2025-06-02 17:45:16.043787 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-06-02 17:45:16.043792 | orchestrator | Monday 02 June 2025 17:39:51 +0000 (0:00:00.498) 0:06:24.228 *********** 2025-06-02 17:45:16.043797 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-02 17:45:16.043801 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-02 17:45:16.043825 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-02 17:45:16.043831 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-06-02 17:45:16.043836 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Monday 02 June 2025 17:40:21 +0000 (0:00:30.599) 0:06:54.828 ***********
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Monday 02 June 2025 17:40:23 +0000 (0:00:01.711) 0:06:56.539 ***********
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Monday 02 June 2025 17:40:24 +0000 (0:00:00.898) 0:06:57.438 ***********
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Monday 02 June 2025 17:40:24 +0000 (0:00:00.151) 0:06:57.589 ***********
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Monday 02 June 2025 17:40:31 +0000 (0:00:06.931) 0:07:04.521 ***********
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 02 June 2025 17:40:36 +0000 (0:00:04.836) 0:07:09.358 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Monday 02 June 2025 17:40:37 +0000 (0:00:00.979) 0:07:10.337 ***********
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Monday 02 June 2025 17:40:37 +0000 (0:00:00.547) 0:07:10.884 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Monday 02 June 2025 17:40:38 +0000 (0:00:00.324) 0:07:11.209 ***********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Monday 02 June 2025 17:40:39 +0000 (0:00:01.485) 0:07:12.695 ***********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Monday 02 June 2025 17:40:40 +0000 (0:00:00.642) 0:07:13.337 ***********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Monday 02 June 2025 17:40:40 +0000 (0:00:00.565) 0:07:13.903 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Monday 02 June 2025 17:40:41 +0000 (0:00:00.760) 0:07:14.664 ***********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Monday 02 June 2025 17:40:42 +0000 (0:00:00.525) 0:07:15.189 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Monday 02 June 2025 17:40:42 +0000 (0:00:00.289) 0:07:15.478 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Monday 02 June 2025 17:40:43 +0000 (0:00:01.012) 0:07:16.490 ***********
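The ceph-mgr tasks above follow a reconcile pattern: read the currently enabled modules, compute the set to disable, then disable and enable modules item by item (always-on modules like balancer and status are skipped). A minimal sketch of that reconciliation logic, assuming the top-level JSON shape of `ceph mgr module ls --format json`; the sample state is illustrative, not taken from this run:

```python
import json

def modules_to_toggle(mgr_module_ls_json, desired):
    """Given `ceph mgr module ls --format json`-style output, return
    (to_disable, to_enable) relative to the desired module set."""
    data = json.loads(mgr_module_ls_json)
    enabled = set(data.get("enabled_modules", []))
    always_on = set(data.get("always_on_modules", []))
    # always-on modules can be neither disabled nor re-enabled, so skip them
    to_disable = sorted(enabled - desired - always_on)
    to_enable = sorted(desired - enabled - always_on)
    return to_disable, to_enable

# hypothetical sample mirroring the run above: iostat/nfs/restful get
# disabled, dashboard/prometheus get enabled, balancer/status are skipped
sample = json.dumps({
    "always_on_modules": ["balancer", "status"],
    "enabled_modules": ["iostat", "nfs", "restful"],
})
print(modules_to_toggle(sample, {"balancer", "dashboard", "prometheus", "status"}))
# → (['iostat', 'nfs', 'restful'], ['dashboard', 'prometheus'])
```

This matches the item lists in the "Disable ceph mgr enabled modules" and "Add modules to ceph-mgr" tasks above.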
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Monday 02 June 2025 17:40:44 +0000 (0:00:00.735) 0:07:17.226 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Monday 02 June 2025 17:40:44 +0000 (0:00:00.742) 0:07:17.969 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Monday 02 June 2025 17:40:45 +0000 (0:00:00.347) 0:07:18.316 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Monday 02 June 2025 17:40:46 +0000 (0:00:00.764) 0:07:19.080 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Monday 02 June 2025 17:40:46 +0000 (0:00:00.402) 0:07:19.483 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Monday 02 June 2025 17:40:47 +0000 (0:00:00.754) 0:07:20.237 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Monday 02 June 2025 17:40:47 +0000 (0:00:00.752) 0:07:20.990 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Monday 02 June 2025 17:40:48 +0000 (0:00:00.582) 0:07:21.572 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Monday 02 June 2025 17:40:48 +0000 (0:00:00.317) 0:07:21.890 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Monday 02 June 2025 17:40:49 +0000 (0:00:00.328) 0:07:22.219 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Monday 02 June 2025 17:40:49 +0000 (0:00:00.299) 0:07:22.518 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Monday 02 June 2025 17:40:50 +0000 (0:00:00.618) 0:07:23.137 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Monday 02 June 2025 17:40:50 +0000 (0:00:00.330) 0:07:23.468 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Monday 02 June 2025 17:40:50 +0000 (0:00:00.290) 0:07:23.758 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Monday 02 June 2025 17:40:51 +0000 (0:00:00.275) 0:07:24.034 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Monday 02 June 2025 17:40:51 +0000 (0:00:00.644) 0:07:24.678 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Monday 02 June 2025 17:40:52 +0000 (0:00:00.555) 0:07:25.233 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Monday 02 June 2025 17:40:52 +0000 (0:00:00.311) 0:07:25.544 ***********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Monday 02 June 2025 17:40:53 +0000 (0:00:00.905) 0:07:26.450 ***********
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Monday 02 June 2025 17:40:54 +0000 (0:00:00.761) 0:07:27.212 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Monday 02 June 2025 17:40:54 +0000 (0:00:00.299) 0:07:27.511 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Monday 02 June 2025 17:40:54 +0000 (0:00:00.301) 0:07:27.813 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Monday 02 June 2025 17:40:55 +0000 (0:00:00.917) 0:07:28.730 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Monday 02 June 2025 17:40:56 +0000 (0:00:00.346) 0:07:29.077 ***********
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
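The "Apply operating system tuning" items above each become a persisted kernel sysctl on the OSD nodes, with `enable: False` items skipped. A rough sketch of that mapping, assuming a sysctl.d-style rendering (ceph-ansible actually applies these via Ansible's sysctl module, so the file format here is illustrative):

```python
# the tuning items exactly as they appear in the run above
os_tuning_params = [
    {"name": "fs.aio-max-nr", "value": "1048576", "enable": True},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]

def render_sysctl_conf(params):
    """Render a sysctl.d-style file body; items with enable=False are skipped."""
    lines = [f"{p['name']} = {p['value']}" for p in params if p.get("enable", True)]
    return "\n".join(lines) + "\n"

print(render_sysctl_conf(os_tuning_params), end="")
```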
Monday 02 June 2025 17:40:58 +0000 (0:00:02.338) 0:07:31.415 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Monday 02 June 2025 17:40:58 +0000 (0:00:00.297) 0:07:31.712 ***********
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Monday 02 June 2025 17:40:59 +0000 (0:00:00.802) 0:07:32.515 ***********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Monday 02 June 2025 17:41:00 +0000 (0:00:01.090) 0:07:33.606 ***********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Monday 02 June 2025 17:41:02 +0000 (0:00:02.392) 0:07:35.999 ***********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-osd : Set noup flag] ************************************************
Monday 02 June 2025 17:41:04 +0000 (0:00:01.533) 0:07:37.533 ***********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
Monday 02 June 2025 17:41:06 +0000 (0:00:02.263) 0:07:39.796 ***********
included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Use ceph-volume to create osds] *******************************
Monday 02 June 2025 17:41:07 +0000 (0:00:00.566) 0:07:40.363 ***********
changed: [testbed-node-5] => (item={'data': 'osd-block-7944d10b-922c-5cd9-bd54-91ce5496d9bc', 'data_vg': 'ceph-7944d10b-922c-5cd9-bd54-91ce5496d9bc'})
changed: [testbed-node-4] => (item={'data': 'osd-block-428bf6aa-16e8-529e-a7f6-02fc5b7007d7', 'data_vg': 'ceph-428bf6aa-16e8-529e-a7f6-02fc5b7007d7'})
changed: [testbed-node-3] => (item={'data': 'osd-block-8450978f-95f9-56a8-b94f-b89f59985534', 'data_vg': 'ceph-8450978f-95f9-56a8-b94f-b89f59985534'})
changed: [testbed-node-5] => (item={'data': 'osd-block-455b12e9-4014-57cf-aec2-de5d805a7d14', 'data_vg': 'ceph-455b12e9-4014-57cf-aec2-de5d805a7d14'})
changed: [testbed-node-3] => (item={'data': 'osd-block-4af7f5ab-70f7-5f81-8195-4d6574833a1e', 'data_vg': 'ceph-4af7f5ab-70f7-5f81-8195-4d6574833a1e'})
changed: [testbed-node-4] => (item={'data': 'osd-block-26d332e8-3a94-5f56-adf2-82846ed63b84', 'data_vg': 'ceph-26d332e8-3a94-5f56-adf2-82846ed63b84'})

TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
Monday 02 June 2025 17:41:48 +0000 (0:00:41.005) 0:08:21.368 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
Monday 02 June 2025 17:41:48 +0000 (0:00:00.585) 0:08:21.954 ***********
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Get osd ids] **************************************************
Monday 02 June 2025 17:41:49 +0000 (0:00:00.545) 0:08:22.499 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Collect osd ids] **********************************************
Monday 02 June 2025 17:41:50 +0000 (0:00:00.669) 0:08:23.168 ***********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Include_tasks systemd.yml] ************************************
Monday 02 June 2025 17:41:53 +0000 (0:00:02.937) 0:08:26.105 ***********
included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Generate systemd unit file] ***********************************
Monday 02 June 2025 17:41:53 +0000 (0:00:00.505) 0:08:26.611 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
Monday 02 June 2025 17:41:54 +0000 (0:00:01.184) 0:08:27.796 ***********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Enable ceph-osd.target] ***************************************
Monday 02 June 2025 17:41:56 +0000 (0:00:01.554) 0:08:29.350 ***********
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

TASK [ceph-osd : Ensure systemd service override directory exists] *************
Monday 02 June 2025 17:41:58 +0000 (0:00:01.779) 0:08:31.129 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
Monday 02 June 2025 17:41:58 +0000 (0:00:00.353) 0:08:31.482 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
Monday 02 June 2025 17:41:58 +0000 (0:00:00.317) 0:08:31.800 ***********
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=5)
ok: [testbed-node-3] => (item=4)
ok: [testbed-node-4] => (item=2)
ok: [testbed-node-5] => (item=1)
ok: [testbed-node-5] => (item=3)

TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
Monday 02 June 2025 17:42:00 +0000 (0:00:01.335) 0:08:33.135 ***********
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-5] => (item=1)
changed: [testbed-node-4] => (item=5)
changed: [testbed-node-3] => (item=4)
changed: [testbed-node-5] => (item=3)
changed: [testbed-node-4] => (item=2)

TASK [ceph-osd : Systemd start osd] ********************************************
Monday 02 June 2025 17:42:02 +0000 (0:00:02.315) 0:08:35.451 ***********
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=5)
changed: [testbed-node-5] => (item=1)
changed: [testbed-node-3] => (item=4)
changed: [testbed-node-4] => (item=2)
changed: [testbed-node-5] => (item=3)

TASK [ceph-osd : Unset noup flag] **********************************************
Monday 02 June 2025 17:42:06 +0000 (0:00:03.842) 0:08:39.294 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Wait for all osd to be up] ************************************
Monday 02 June 2025 17:42:09 +0000 (0:00:03.172) 0:08:42.466 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
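The "Wait for all osd to be up" task above is a polling loop: it checks the cluster's OSD map up to 60 times until every OSD is up and in (here it failed once while the last OSDs were starting, then succeeded). A minimal sketch of that loop, assuming the JSON shape returned by `ceph osd stat -f json`; the poll samples are simulated, not from this run:

```python
import json

def all_osds_up(osd_stat_json):
    """True once every registered OSD is both up and in."""
    stat = json.loads(osd_stat_json)["osdmap"]
    return stat["num_osds"] > 0 and stat["num_osds"] == stat["num_up_osds"] == stat["num_in_osds"]

def wait_for_osds(poll, retries=60):
    """Return the attempt index on which the cluster became healthy."""
    for attempt in range(retries):
        if all_osds_up(poll()):
            return attempt
        # a real loop would sleep between attempts (ceph-ansible uses a delay)
    raise TimeoutError("OSDs did not come up in time")

# simulated polls: the first sample still has one of the 6 OSDs down
# (matching the single FAILED - RETRYING line above), the second is healthy
samples = iter([
    '{"osdmap": {"num_osds": 6, "num_up_osds": 5, "num_in_osds": 6}}',
    '{"osdmap": {"num_osds": 6, "num_up_osds": 6, "num_in_osds": 6}}',
])
print(wait_for_osds(lambda: next(samples)))  # → 1 (succeeds on the second attempt)
```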
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include crush_rules.yml] **************************************
Monday 02 June 2025 17:42:22 +0000 (0:00:12.972) 0:08:55.438 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Monday 02 June 2025 17:42:23 +0000 (0:00:00.851) 0:08:56.290 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Monday 02 June 2025 17:42:23 +0000 (0:00:00.624) 0:08:56.914 ***********
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Monday 02 June 2025 17:42:24 +0000 (0:00:00.597) 0:08:57.512 ***********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Monday 02 June 2025 17:42:24 +0000 (0:00:00.411) 0:08:57.923 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Monday 02 June 2025 17:42:25 +0000 (0:00:00.296) 0:08:58.219 ***********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Monday 02 June 2025 17:42:25 +0000 (0:00:00.242) 0:08:58.462 ***********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Monday 02 June 2025 17:42:26 +0000 (0:00:00.608) 0:08:59.071 ***********
skipping: [testbed-node-3]

RUNNING
HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-02 17:45:16.047176 | orchestrator | Monday 02 June 2025 17:42:26 +0000 (0:00:00.223) 0:08:59.295 *********** 2025-06-02 17:45:16.047183 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.047190 | orchestrator | 2025-06-02 17:45:16.047197 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-02 17:45:16.047204 | orchestrator | Monday 02 June 2025 17:42:26 +0000 (0:00:00.252) 0:08:59.547 *********** 2025-06-02 17:45:16.047211 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.047218 | orchestrator | 2025-06-02 17:45:16.047225 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-02 17:45:16.047232 | orchestrator | Monday 02 June 2025 17:42:26 +0000 (0:00:00.148) 0:08:59.696 *********** 2025-06-02 17:45:16.047239 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.047246 | orchestrator | 2025-06-02 17:45:16.047253 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-02 17:45:16.047260 | orchestrator | Monday 02 June 2025 17:42:26 +0000 (0:00:00.270) 0:08:59.966 *********** 2025-06-02 17:45:16.047267 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.047274 | orchestrator | 2025-06-02 17:45:16.047292 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-02 17:45:16.047299 | orchestrator | Monday 02 June 2025 17:42:27 +0000 (0:00:00.326) 0:09:00.293 *********** 2025-06-02 17:45:16.047306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:45:16.047313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:45:16.047320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:45:16.047326 | orchestrator | skipping: [testbed-node-3] 2025-06-02 
17:45:16.047333 | orchestrator | 2025-06-02 17:45:16.047340 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-02 17:45:16.047358 | orchestrator | Monday 02 June 2025 17:42:27 +0000 (0:00:00.392) 0:09:00.686 *********** 2025-06-02 17:45:16.047365 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.047372 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.047379 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.047386 | orchestrator | 2025-06-02 17:45:16.047393 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-02 17:45:16.047400 | orchestrator | Monday 02 June 2025 17:42:28 +0000 (0:00:00.391) 0:09:01.077 *********** 2025-06-02 17:45:16.047407 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.047414 | orchestrator | 2025-06-02 17:45:16.047421 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-02 17:45:16.047428 | orchestrator | Monday 02 June 2025 17:42:28 +0000 (0:00:00.896) 0:09:01.973 *********** 2025-06-02 17:45:16.047435 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.047441 | orchestrator | 2025-06-02 17:45:16.047448 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-02 17:45:16.047455 | orchestrator | 2025-06-02 17:45:16.047462 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 17:45:16.047469 | orchestrator | Monday 02 June 2025 17:42:29 +0000 (0:00:00.691) 0:09:02.665 *********** 2025-06-02 17:45:16.047477 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.047486 | orchestrator | 2025-06-02 17:45:16.047493 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2025-06-02 17:45:16.047500 | orchestrator | Monday 02 June 2025 17:42:30 +0000 (0:00:01.311) 0:09:03.976 *********** 2025-06-02 17:45:16.047507 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.047514 | orchestrator | 2025-06-02 17:45:16.047521 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 17:45:16.047527 | orchestrator | Monday 02 June 2025 17:42:32 +0000 (0:00:01.329) 0:09:05.305 *********** 2025-06-02 17:45:16.047534 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.047541 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.047548 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.047555 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.047562 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.047569 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.047576 | orchestrator | 2025-06-02 17:45:16.047583 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 17:45:16.047590 | orchestrator | Monday 02 June 2025 17:42:33 +0000 (0:00:01.051) 0:09:06.357 *********** 2025-06-02 17:45:16.047598 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.047605 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.047612 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.047619 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.047626 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.047634 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.047641 | orchestrator | 2025-06-02 17:45:16.047649 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 17:45:16.047657 | orchestrator | Monday 02 
June 2025 17:42:34 +0000 (0:00:01.133) 0:09:07.491 *********** 2025-06-02 17:45:16.047664 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.047672 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.047679 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.047686 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.047691 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.047695 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.047700 | orchestrator | 2025-06-02 17:45:16.047704 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 17:45:16.047713 | orchestrator | Monday 02 June 2025 17:42:35 +0000 (0:00:01.339) 0:09:08.830 *********** 2025-06-02 17:45:16.047722 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.047727 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.047731 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.047736 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.047740 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.047744 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.047749 | orchestrator | 2025-06-02 17:45:16.047753 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 17:45:16.047758 | orchestrator | Monday 02 June 2025 17:42:36 +0000 (0:00:01.011) 0:09:09.841 *********** 2025-06-02 17:45:16.047762 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.047767 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.047771 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.047775 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.047780 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.047784 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.047789 | orchestrator | 2025-06-02 17:45:16.047793 | orchestrator | TASK [ceph-handler : Check for a rbd mirror 
container] ************************* 2025-06-02 17:45:16.047798 | orchestrator | Monday 02 June 2025 17:42:37 +0000 (0:00:00.860) 0:09:10.702 *********** 2025-06-02 17:45:16.047802 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.047806 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.047811 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.047815 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.047820 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.047824 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.047828 | orchestrator | 2025-06-02 17:45:16.047837 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 17:45:16.047842 | orchestrator | Monday 02 June 2025 17:42:38 +0000 (0:00:00.613) 0:09:11.315 *********** 2025-06-02 17:45:16.047846 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.047851 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.047855 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.047859 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.047864 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.047868 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.047873 | orchestrator | 2025-06-02 17:45:16.047877 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 17:45:16.047882 | orchestrator | Monday 02 June 2025 17:42:39 +0000 (0:00:00.863) 0:09:12.179 *********** 2025-06-02 17:45:16.047886 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.047904 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.047909 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.047913 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.047918 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.047923 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.047927 | 
orchestrator | 2025-06-02 17:45:16.047932 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 17:45:16.047936 | orchestrator | Monday 02 June 2025 17:42:40 +0000 (0:00:00.991) 0:09:13.170 *********** 2025-06-02 17:45:16.047941 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.047945 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.047950 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.047954 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.047958 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.047963 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.047967 | orchestrator | 2025-06-02 17:45:16.047972 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 17:45:16.047976 | orchestrator | Monday 02 June 2025 17:42:41 +0000 (0:00:01.300) 0:09:14.471 *********** 2025-06-02 17:45:16.047981 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.047985 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.047990 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.047994 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.048002 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.048006 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.048011 | orchestrator | 2025-06-02 17:45:16.048015 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 17:45:16.048020 | orchestrator | Monday 02 June 2025 17:42:42 +0000 (0:00:00.580) 0:09:15.052 *********** 2025-06-02 17:45:16.048024 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.048029 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.048033 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.048038 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.048042 | orchestrator | skipping: [testbed-node-4] 2025-06-02 
17:45:16.048047 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.048051 | orchestrator | 2025-06-02 17:45:16.048055 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 17:45:16.048060 | orchestrator | Monday 02 June 2025 17:42:42 +0000 (0:00:00.811) 0:09:15.864 *********** 2025-06-02 17:45:16.048064 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.048069 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.048073 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.048078 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.048082 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.048087 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.048091 | orchestrator | 2025-06-02 17:45:16.048096 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 17:45:16.048100 | orchestrator | Monday 02 June 2025 17:42:43 +0000 (0:00:00.657) 0:09:16.522 *********** 2025-06-02 17:45:16.048105 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.048109 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.048114 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.048118 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.048123 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.048127 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.048132 | orchestrator | 2025-06-02 17:45:16.048136 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 17:45:16.048141 | orchestrator | Monday 02 June 2025 17:42:44 +0000 (0:00:00.863) 0:09:17.386 *********** 2025-06-02 17:45:16.048145 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.048150 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.048154 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.048159 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 17:45:16.048163 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.048167 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.048172 | orchestrator | 2025-06-02 17:45:16.048179 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 17:45:16.048184 | orchestrator | Monday 02 June 2025 17:42:45 +0000 (0:00:00.668) 0:09:18.054 *********** 2025-06-02 17:45:16.048188 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.048193 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.048200 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.048208 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.048216 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.048223 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.048232 | orchestrator | 2025-06-02 17:45:16.048240 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 17:45:16.048248 | orchestrator | Monday 02 June 2025 17:42:46 +0000 (0:00:01.024) 0:09:19.079 *********** 2025-06-02 17:45:16.048256 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:45:16.048264 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:45:16.048272 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:45:16.048280 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.048288 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.048297 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.048304 | orchestrator | 2025-06-02 17:45:16.048312 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 17:45:16.048326 | orchestrator | Monday 02 June 2025 17:42:46 +0000 (0:00:00.632) 0:09:19.711 *********** 2025-06-02 17:45:16.048334 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.048342 | orchestrator | ok: [testbed-node-1] 2025-06-02 
17:45:16.048350 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.048357 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.048365 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.048374 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.048381 | orchestrator | 2025-06-02 17:45:16.048393 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 17:45:16.048402 | orchestrator | Monday 02 June 2025 17:42:47 +0000 (0:00:00.994) 0:09:20.706 *********** 2025-06-02 17:45:16.048409 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.048417 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.048425 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.048433 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.048440 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.048447 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.048455 | orchestrator | 2025-06-02 17:45:16.048462 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 17:45:16.048470 | orchestrator | Monday 02 June 2025 17:42:48 +0000 (0:00:00.653) 0:09:21.359 *********** 2025-06-02 17:45:16.048477 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.048484 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.048493 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.048501 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.048509 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.048516 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.048523 | orchestrator | 2025-06-02 17:45:16.048530 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-06-02 17:45:16.048537 | orchestrator | Monday 02 June 2025 17:42:49 +0000 (0:00:01.261) 0:09:22.620 *********** 2025-06-02 17:45:16.048545 | orchestrator | changed: [testbed-node-0] 2025-06-02 
17:45:16.048552 | orchestrator | 2025-06-02 17:45:16.048559 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-06-02 17:45:16.048566 | orchestrator | Monday 02 June 2025 17:42:54 +0000 (0:00:04.663) 0:09:27.284 *********** 2025-06-02 17:45:16.048573 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.048580 | orchestrator | 2025-06-02 17:45:16.048588 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-06-02 17:45:16.048595 | orchestrator | Monday 02 June 2025 17:42:56 +0000 (0:00:02.092) 0:09:29.376 *********** 2025-06-02 17:45:16.048602 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.048609 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.048616 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.048623 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.048630 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.048638 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.048645 | orchestrator | 2025-06-02 17:45:16.048652 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-06-02 17:45:16.048660 | orchestrator | Monday 02 June 2025 17:42:58 +0000 (0:00:01.781) 0:09:31.158 *********** 2025-06-02 17:45:16.048667 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.048674 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.048681 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.048688 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.048695 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.048702 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.048709 | orchestrator | 2025-06-02 17:45:16.048717 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-06-02 17:45:16.048724 | orchestrator | Monday 02 June 2025 17:42:59 +0000 
(0:00:01.071) 0:09:32.229 *********** 2025-06-02 17:45:16.048733 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.048746 | orchestrator | 2025-06-02 17:45:16.048753 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-02 17:45:16.048760 | orchestrator | Monday 02 June 2025 17:43:00 +0000 (0:00:01.263) 0:09:33.493 *********** 2025-06-02 17:45:16.048767 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.048775 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.048782 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.048789 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.048796 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.048803 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.048810 | orchestrator | 2025-06-02 17:45:16.048818 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-02 17:45:16.048825 | orchestrator | Monday 02 June 2025 17:43:02 +0000 (0:00:01.837) 0:09:35.331 *********** 2025-06-02 17:45:16.048832 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.048839 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.048847 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.048854 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.048861 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.048868 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.048875 | orchestrator | 2025-06-02 17:45:16.048882 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-02 17:45:16.048889 | orchestrator | Monday 02 June 2025 17:43:05 +0000 (0:00:03.475) 0:09:38.806 *********** 2025-06-02 17:45:16.048912 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.048919 | orchestrator | 2025-06-02 17:45:16.048926 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-06-02 17:45:16.048933 | orchestrator | Monday 02 June 2025 17:43:07 +0000 (0:00:01.307) 0:09:40.113 *********** 2025-06-02 17:45:16.048940 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.048947 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.048954 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:45:16.048961 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.048968 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.048975 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.048983 | orchestrator | 2025-06-02 17:45:16.048990 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-06-02 17:45:16.048998 | orchestrator | Monday 02 June 2025 17:43:07 +0000 (0:00:00.830) 0:09:40.943 *********** 2025-06-02 17:45:16.049005 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:45:16.049012 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:45:16.049020 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:45:16.049026 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.049031 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.049036 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.049040 | orchestrator | 2025-06-02 17:45:16.049045 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-06-02 17:45:16.049054 | orchestrator | Monday 02 June 2025 17:43:10 +0000 (0:00:02.147) 0:09:43.090 *********** 2025-06-02 17:45:16.049059 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:45:16.049063 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:45:16.049068 | orchestrator | ok: 
[testbed-node-2] 2025-06-02 17:45:16.049073 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.049077 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.049082 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.049086 | orchestrator | 2025-06-02 17:45:16.049091 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-06-02 17:45:16.049095 | orchestrator | 2025-06-02 17:45:16.049100 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 17:45:16.049104 | orchestrator | Monday 02 June 2025 17:43:11 +0000 (0:00:01.071) 0:09:44.162 *********** 2025-06-02 17:45:16.049109 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.049118 | orchestrator | 2025-06-02 17:45:16.049123 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 17:45:16.049127 | orchestrator | Monday 02 June 2025 17:43:11 +0000 (0:00:00.580) 0:09:44.743 *********** 2025-06-02 17:45:16.049160 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.049165 | orchestrator | 2025-06-02 17:45:16.049169 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 17:45:16.049174 | orchestrator | Monday 02 June 2025 17:43:12 +0000 (0:00:00.811) 0:09:45.555 *********** 2025-06-02 17:45:16.049178 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.049183 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.049187 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.049192 | orchestrator | 2025-06-02 17:45:16.049196 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 17:45:16.049201 | orchestrator | 
Monday 02 June 2025 17:43:12 +0000 (0:00:00.322) 0:09:45.878 *********** 2025-06-02 17:45:16.049205 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.049210 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.049214 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.049219 | orchestrator | 2025-06-02 17:45:16.049223 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 17:45:16.049227 | orchestrator | Monday 02 June 2025 17:43:13 +0000 (0:00:00.800) 0:09:46.678 *********** 2025-06-02 17:45:16.049232 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.049236 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.049241 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.049245 | orchestrator | 2025-06-02 17:45:16.049250 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 17:45:16.049254 | orchestrator | Monday 02 June 2025 17:43:14 +0000 (0:00:01.036) 0:09:47.715 *********** 2025-06-02 17:45:16.049259 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.049263 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.049268 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.049272 | orchestrator | 2025-06-02 17:45:16.049277 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 17:45:16.049281 | orchestrator | Monday 02 June 2025 17:43:15 +0000 (0:00:00.747) 0:09:48.462 *********** 2025-06-02 17:45:16.049286 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.049290 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.049295 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.049299 | orchestrator | 2025-06-02 17:45:16.049304 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 17:45:16.049308 | orchestrator | Monday 02 June 2025 17:43:15 +0000 (0:00:00.339) 
0:09:48.802 *********** 2025-06-02 17:45:16.049313 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.049317 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.049322 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.049326 | orchestrator | 2025-06-02 17:45:16.049330 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 17:45:16.049335 | orchestrator | Monday 02 June 2025 17:43:16 +0000 (0:00:00.294) 0:09:49.096 *********** 2025-06-02 17:45:16.049339 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.049344 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.049348 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.049353 | orchestrator | 2025-06-02 17:45:16.049357 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 17:45:16.049364 | orchestrator | Monday 02 June 2025 17:43:16 +0000 (0:00:00.469) 0:09:49.565 *********** 2025-06-02 17:45:16.049369 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.049374 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.049378 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.049383 | orchestrator | 2025-06-02 17:45:16.049391 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 17:45:16.049395 | orchestrator | Monday 02 June 2025 17:43:17 +0000 (0:00:00.701) 0:09:50.267 *********** 2025-06-02 17:45:16.049400 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.049404 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.049408 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.049413 | orchestrator | 2025-06-02 17:45:16.049417 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 17:45:16.049422 | orchestrator | Monday 02 June 2025 17:43:17 +0000 (0:00:00.671) 0:09:50.938 *********** 2025-06-02 
17:45:16.049426 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.049431 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.049435 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.049439 | orchestrator | 2025-06-02 17:45:16.049444 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-02 17:45:16.049448 | orchestrator | Monday 02 June 2025 17:43:18 +0000 (0:00:00.292) 0:09:51.231 *********** 2025-06-02 17:45:16.049453 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.049457 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.049462 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.049466 | orchestrator | 2025-06-02 17:45:16.049471 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 17:45:16.049475 | orchestrator | Monday 02 June 2025 17:43:18 +0000 (0:00:00.468) 0:09:51.699 *********** 2025-06-02 17:45:16.049483 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.049488 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.049492 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.049497 | orchestrator | 2025-06-02 17:45:16.049501 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 17:45:16.049506 | orchestrator | Monday 02 June 2025 17:43:19 +0000 (0:00:00.357) 0:09:52.057 *********** 2025-06-02 17:45:16.049510 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.049515 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.049519 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.049523 | orchestrator | 2025-06-02 17:45:16.049528 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 17:45:16.049533 | orchestrator | Monday 02 June 2025 17:43:19 +0000 (0:00:00.429) 0:09:52.486 *********** 2025-06-02 17:45:16.049537 | orchestrator | ok: 
[testbed-node-3] 2025-06-02 17:45:16.049541 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.049546 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.049550 | orchestrator | 2025-06-02 17:45:16.049555 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-02 17:45:16.049559 | orchestrator | Monday 02 June 2025 17:43:19 +0000 (0:00:00.344) 0:09:52.831 *********** 2025-06-02 17:45:16.049564 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.049568 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.049573 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.049577 | orchestrator | 2025-06-02 17:45:16.049582 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 17:45:16.049586 | orchestrator | Monday 02 June 2025 17:43:20 +0000 (0:00:00.632) 0:09:53.463 *********** 2025-06-02 17:45:16.049591 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.049595 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.049600 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.049604 | orchestrator | 2025-06-02 17:45:16.049609 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 17:45:16.049613 | orchestrator | Monday 02 June 2025 17:43:20 +0000 (0:00:00.344) 0:09:53.808 *********** 2025-06-02 17:45:16.049618 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.049622 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.049627 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.049631 | orchestrator | 2025-06-02 17:45:16.049635 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 17:45:16.049644 | orchestrator | Monday 02 June 2025 17:43:21 +0000 (0:00:00.346) 0:09:54.155 *********** 2025-06-02 17:45:16.049648 | orchestrator | ok: [testbed-node-3] 
2025-06-02 17:45:16.049653 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.049657 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.049662 | orchestrator | 2025-06-02 17:45:16.049666 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-02 17:45:16.049670 | orchestrator | Monday 02 June 2025 17:43:21 +0000 (0:00:00.323) 0:09:54.478 *********** 2025-06-02 17:45:16.049675 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.049679 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.049684 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.049688 | orchestrator | 2025-06-02 17:45:16.049693 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-06-02 17:45:16.049697 | orchestrator | Monday 02 June 2025 17:43:22 +0000 (0:00:00.885) 0:09:55.363 *********** 2025-06-02 17:45:16.049702 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.049706 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.049711 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-06-02 17:45:16.049715 | orchestrator | 2025-06-02 17:45:16.049720 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-06-02 17:45:16.049724 | orchestrator | Monday 02 June 2025 17:43:22 +0000 (0:00:00.436) 0:09:55.800 *********** 2025-06-02 17:45:16.049729 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 17:45:16.049733 | orchestrator | 2025-06-02 17:45:16.049738 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-06-02 17:45:16.049742 | orchestrator | Monday 02 June 2025 17:43:24 +0000 (0:00:02.200) 0:09:58.000 *********** 2025-06-02 17:45:16.049749 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 
'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-06-02 17:45:16.049756 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.049760 | orchestrator | 2025-06-02 17:45:16.049767 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-06-02 17:45:16.049772 | orchestrator | Monday 02 June 2025 17:43:25 +0000 (0:00:00.240) 0:09:58.241 *********** 2025-06-02 17:45:16.049779 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 17:45:16.049793 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-02 17:45:16.049800 | orchestrator | 2025-06-02 17:45:16.049812 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-06-02 17:45:16.049822 | orchestrator | Monday 02 June 2025 17:43:34 +0000 (0:00:08.880) 0:10:07.122 *********** 2025-06-02 17:45:16.049829 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-02 17:45:16.049835 | orchestrator | 2025-06-02 17:45:16.049842 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-06-02 17:45:16.049849 | orchestrator | Monday 02 June 2025 17:43:37 +0000 (0:00:03.660) 0:10:10.783 *********** 2025-06-02 17:45:16.049861 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.049868 | orchestrator | 2025-06-02 17:45:16.049874 | 
orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-06-02 17:45:16.049881 | orchestrator | Monday 02 June 2025 17:43:38 +0000 (0:00:00.633) 0:10:11.417 *********** 2025-06-02 17:45:16.049889 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 17:45:16.049939 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 17:45:16.049948 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-02 17:45:16.049956 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-06-02 17:45:16.049963 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-06-02 17:45:16.049970 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-06-02 17:45:16.049978 | orchestrator | 2025-06-02 17:45:16.049983 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-06-02 17:45:16.049988 | orchestrator | Monday 02 June 2025 17:43:39 +0000 (0:00:01.209) 0:10:12.626 *********** 2025-06-02 17:45:16.049992 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:45:16.049997 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 17:45:16.050001 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 17:45:16.050006 | orchestrator | 2025-06-02 17:45:16.050010 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-06-02 17:45:16.050054 | orchestrator | Monday 02 June 2025 17:43:42 +0000 (0:00:03.215) 0:10:15.842 *********** 2025-06-02 17:45:16.050059 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 17:45:16.050064 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 17:45:16.050068 | orchestrator | changed: [testbed-node-3] 
2025-06-02 17:45:16.050073 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 17:45:16.050077 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-02 17:45:16.050082 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.050086 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 17:45:16.050091 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-02 17:45:16.050095 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.050100 | orchestrator | 2025-06-02 17:45:16.050104 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-06-02 17:45:16.050109 | orchestrator | Monday 02 June 2025 17:43:44 +0000 (0:00:01.934) 0:10:17.776 *********** 2025-06-02 17:45:16.050113 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.050118 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.050122 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.050127 | orchestrator | 2025-06-02 17:45:16.050131 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-06-02 17:45:16.050135 | orchestrator | Monday 02 June 2025 17:43:47 +0000 (0:00:02.791) 0:10:20.568 *********** 2025-06-02 17:45:16.050140 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.050145 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.050149 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.050153 | orchestrator | 2025-06-02 17:45:16.050158 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-06-02 17:45:16.050162 | orchestrator | Monday 02 June 2025 17:43:47 +0000 (0:00:00.402) 0:10:20.970 *********** 2025-06-02 17:45:16.050167 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.050171 | orchestrator | 2025-06-02 17:45:16.050175 | 
orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-06-02 17:45:16.050179 | orchestrator | Monday 02 June 2025 17:43:48 +0000 (0:00:00.860) 0:10:21.831 *********** 2025-06-02 17:45:16.050183 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.050187 | orchestrator | 2025-06-02 17:45:16.050191 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-06-02 17:45:16.050195 | orchestrator | Monday 02 June 2025 17:43:49 +0000 (0:00:00.655) 0:10:22.487 *********** 2025-06-02 17:45:16.050202 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.050207 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.050217 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.050221 | orchestrator | 2025-06-02 17:45:16.050225 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-06-02 17:45:16.050229 | orchestrator | Monday 02 June 2025 17:43:50 +0000 (0:00:01.307) 0:10:23.794 *********** 2025-06-02 17:45:16.050233 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.050237 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.050241 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.050245 | orchestrator | 2025-06-02 17:45:16.050249 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-06-02 17:45:16.050253 | orchestrator | Monday 02 June 2025 17:43:52 +0000 (0:00:01.561) 0:10:25.356 *********** 2025-06-02 17:45:16.050257 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.050261 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.050265 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.050269 | orchestrator | 2025-06-02 17:45:16.050274 | orchestrator | TASK [ceph-mds : Systemd start mds container] 
********************************** 2025-06-02 17:45:16.050278 | orchestrator | Monday 02 June 2025 17:43:54 +0000 (0:00:01.913) 0:10:27.269 *********** 2025-06-02 17:45:16.050282 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.050286 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.050290 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.050294 | orchestrator | 2025-06-02 17:45:16.050298 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-06-02 17:45:16.050302 | orchestrator | Monday 02 June 2025 17:43:56 +0000 (0:00:02.026) 0:10:29.295 *********** 2025-06-02 17:45:16.050306 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.050315 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.050319 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.050323 | orchestrator | 2025-06-02 17:45:16.050327 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 17:45:16.050331 | orchestrator | Monday 02 June 2025 17:43:57 +0000 (0:00:01.437) 0:10:30.733 *********** 2025-06-02 17:45:16.050335 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.050339 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.050343 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.050347 | orchestrator | 2025-06-02 17:45:16.050351 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-02 17:45:16.050355 | orchestrator | Monday 02 June 2025 17:43:58 +0000 (0:00:00.670) 0:10:31.404 *********** 2025-06-02 17:45:16.050360 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.050364 | orchestrator | 2025-06-02 17:45:16.050368 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-02 17:45:16.050372 | orchestrator | 
Monday 02 June 2025 17:43:59 +0000 (0:00:00.786) 0:10:32.191 *********** 2025-06-02 17:45:16.050376 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.050380 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.050384 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.050388 | orchestrator | 2025-06-02 17:45:16.050392 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-02 17:45:16.050396 | orchestrator | Monday 02 June 2025 17:43:59 +0000 (0:00:00.363) 0:10:32.554 *********** 2025-06-02 17:45:16.050400 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.050405 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.050409 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.050413 | orchestrator | 2025-06-02 17:45:16.050417 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-02 17:45:16.050421 | orchestrator | Monday 02 June 2025 17:44:00 +0000 (0:00:01.232) 0:10:33.786 *********** 2025-06-02 17:45:16.050425 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:45:16.050429 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:45:16.050433 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:45:16.050440 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.050444 | orchestrator | 2025-06-02 17:45:16.050448 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-02 17:45:16.050452 | orchestrator | Monday 02 June 2025 17:44:01 +0000 (0:00:00.874) 0:10:34.661 *********** 2025-06-02 17:45:16.050456 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.050497 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.050502 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.050506 | orchestrator | 2025-06-02 17:45:16.050510 | orchestrator | PLAY [Apply role 
ceph-rgw] ***************************************************** 2025-06-02 17:45:16.050514 | orchestrator | 2025-06-02 17:45:16.050518 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-02 17:45:16.050522 | orchestrator | Monday 02 June 2025 17:44:02 +0000 (0:00:00.809) 0:10:35.470 *********** 2025-06-02 17:45:16.050526 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.050530 | orchestrator | 2025-06-02 17:45:16.050534 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-02 17:45:16.050538 | orchestrator | Monday 02 June 2025 17:44:02 +0000 (0:00:00.527) 0:10:35.998 *********** 2025-06-02 17:45:16.050543 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.050547 | orchestrator | 2025-06-02 17:45:16.050551 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-02 17:45:16.050555 | orchestrator | Monday 02 June 2025 17:44:03 +0000 (0:00:00.849) 0:10:36.848 *********** 2025-06-02 17:45:16.050559 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.050563 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.050567 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.050571 | orchestrator | 2025-06-02 17:45:16.050575 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-02 17:45:16.050579 | orchestrator | Monday 02 June 2025 17:44:04 +0000 (0:00:00.321) 0:10:37.169 *********** 2025-06-02 17:45:16.050583 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.050587 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.050594 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.050598 | orchestrator | 
2025-06-02 17:45:16.050602 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-02 17:45:16.050607 | orchestrator | Monday 02 June 2025 17:44:04 +0000 (0:00:00.696) 0:10:37.866 *********** 2025-06-02 17:45:16.050611 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.050615 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.050619 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.050623 | orchestrator | 2025-06-02 17:45:16.050627 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-02 17:45:16.050631 | orchestrator | Monday 02 June 2025 17:44:05 +0000 (0:00:00.736) 0:10:38.602 *********** 2025-06-02 17:45:16.050635 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.050639 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.050643 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.050647 | orchestrator | 2025-06-02 17:45:16.050651 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-02 17:45:16.050655 | orchestrator | Monday 02 June 2025 17:44:06 +0000 (0:00:01.073) 0:10:39.676 *********** 2025-06-02 17:45:16.050659 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.050663 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.050667 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.050672 | orchestrator | 2025-06-02 17:45:16.050676 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-02 17:45:16.050680 | orchestrator | Monday 02 June 2025 17:44:06 +0000 (0:00:00.326) 0:10:40.003 *********** 2025-06-02 17:45:16.050684 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.050688 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.050696 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.050700 | orchestrator | 2025-06-02 17:45:16.050708 | 
orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-02 17:45:16.050712 | orchestrator | Monday 02 June 2025 17:44:07 +0000 (0:00:00.338) 0:10:40.342 *********** 2025-06-02 17:45:16.050716 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.050720 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.050724 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.050728 | orchestrator | 2025-06-02 17:45:16.050732 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-02 17:45:16.050736 | orchestrator | Monday 02 June 2025 17:44:07 +0000 (0:00:00.300) 0:10:40.643 *********** 2025-06-02 17:45:16.050740 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.050745 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.050749 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.050755 | orchestrator | 2025-06-02 17:45:16.050761 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-02 17:45:16.050771 | orchestrator | Monday 02 June 2025 17:44:08 +0000 (0:00:01.057) 0:10:41.701 *********** 2025-06-02 17:45:16.050779 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.050786 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.050792 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.050798 | orchestrator | 2025-06-02 17:45:16.050804 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-02 17:45:16.050810 | orchestrator | Monday 02 June 2025 17:44:09 +0000 (0:00:00.739) 0:10:42.440 *********** 2025-06-02 17:45:16.050816 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.050822 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.050828 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.050834 | orchestrator | 2025-06-02 17:45:16.050840 | orchestrator | TASK [ceph-handler : 
Set_fact handler_mon_status] ****************************** 2025-06-02 17:45:16.050847 | orchestrator | Monday 02 June 2025 17:44:09 +0000 (0:00:00.299) 0:10:42.740 *********** 2025-06-02 17:45:16.050853 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.050860 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.050867 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.050873 | orchestrator | 2025-06-02 17:45:16.050880 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-02 17:45:16.050887 | orchestrator | Monday 02 June 2025 17:44:10 +0000 (0:00:00.308) 0:10:43.048 *********** 2025-06-02 17:45:16.050908 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.050915 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.050921 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.050928 | orchestrator | 2025-06-02 17:45:16.050934 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-02 17:45:16.050941 | orchestrator | Monday 02 June 2025 17:44:10 +0000 (0:00:00.644) 0:10:43.692 *********** 2025-06-02 17:45:16.050948 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.050954 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.050960 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.050966 | orchestrator | 2025-06-02 17:45:16.050973 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-02 17:45:16.050980 | orchestrator | Monday 02 June 2025 17:44:11 +0000 (0:00:00.378) 0:10:44.071 *********** 2025-06-02 17:45:16.050987 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.050993 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.050999 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.051006 | orchestrator | 2025-06-02 17:45:16.051012 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] 
****************************** 2025-06-02 17:45:16.051018 | orchestrator | Monday 02 June 2025 17:44:11 +0000 (0:00:00.331) 0:10:44.403 *********** 2025-06-02 17:45:16.051025 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.051031 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.051037 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.051043 | orchestrator | 2025-06-02 17:45:16.051050 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-02 17:45:16.051062 | orchestrator | Monday 02 June 2025 17:44:11 +0000 (0:00:00.342) 0:10:44.745 *********** 2025-06-02 17:45:16.051068 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.051075 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.051081 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.051087 | orchestrator | 2025-06-02 17:45:16.051093 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-02 17:45:16.051099 | orchestrator | Monday 02 June 2025 17:44:12 +0000 (0:00:00.621) 0:10:45.366 *********** 2025-06-02 17:45:16.051106 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.051112 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.051118 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.051124 | orchestrator | 2025-06-02 17:45:16.051134 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-02 17:45:16.051141 | orchestrator | Monday 02 June 2025 17:44:12 +0000 (0:00:00.308) 0:10:45.675 *********** 2025-06-02 17:45:16.051147 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.051153 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.051160 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.051166 | orchestrator | 2025-06-02 17:45:16.051172 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] 
************************* 2025-06-02 17:45:16.051178 | orchestrator | Monday 02 June 2025 17:44:12 +0000 (0:00:00.312) 0:10:45.988 *********** 2025-06-02 17:45:16.051185 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.051191 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.051197 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.051203 | orchestrator | 2025-06-02 17:45:16.051210 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-06-02 17:45:16.051216 | orchestrator | Monday 02 June 2025 17:44:13 +0000 (0:00:00.785) 0:10:46.773 *********** 2025-06-02 17:45:16.051222 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.051229 | orchestrator | 2025-06-02 17:45:16.051235 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-02 17:45:16.051241 | orchestrator | Monday 02 June 2025 17:44:14 +0000 (0:00:00.693) 0:10:47.466 *********** 2025-06-02 17:45:16.051247 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:45:16.051254 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 17:45:16.051260 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 17:45:16.051266 | orchestrator | 2025-06-02 17:45:16.051277 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-02 17:45:16.051284 | orchestrator | Monday 02 June 2025 17:44:16 +0000 (0:00:02.309) 0:10:49.776 *********** 2025-06-02 17:45:16.051291 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 17:45:16.051297 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-02 17:45:16.051303 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.051309 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 17:45:16.051316 
| orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-02 17:45:16.051322 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.051328 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 17:45:16.051334 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-02 17:45:16.051341 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.051347 | orchestrator | 2025-06-02 17:45:16.051353 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-06-02 17:45:16.051359 | orchestrator | Monday 02 June 2025 17:44:18 +0000 (0:00:01.605) 0:10:51.382 *********** 2025-06-02 17:45:16.051365 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.051372 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.051378 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.051384 | orchestrator | 2025-06-02 17:45:16.051390 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-06-02 17:45:16.051401 | orchestrator | Monday 02 June 2025 17:44:18 +0000 (0:00:00.329) 0:10:51.712 *********** 2025-06-02 17:45:16.051408 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.051414 | orchestrator | 2025-06-02 17:45:16.051421 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-06-02 17:45:16.051427 | orchestrator | Monday 02 June 2025 17:44:19 +0000 (0:00:00.578) 0:10:52.291 *********** 2025-06-02 17:45:16.051433 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 17:45:16.051441 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 
'radosgw_frontend_port': 8081}) 2025-06-02 17:45:16.051448 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 17:45:16.051455 | orchestrator | 2025-06-02 17:45:16.051463 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-06-02 17:45:16.051469 | orchestrator | Monday 02 June 2025 17:44:20 +0000 (0:00:01.352) 0:10:53.643 *********** 2025-06-02 17:45:16.051475 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:45:16.051479 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 17:45:16.051483 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:45:16.051487 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:45:16.051491 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 17:45:16.051496 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-02 17:45:16.051500 | orchestrator | 2025-06-02 17:45:16.051504 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-02 17:45:16.051508 | orchestrator | Monday 02 June 2025 17:44:25 +0000 (0:00:04.473) 0:10:58.117 *********** 2025-06-02 17:45:16.051512 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:45:16.051516 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 17:45:16.051523 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2025-06-02 17:45:16.051527 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 17:45:16.051531 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-02 17:45:16.051535 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-02 17:45:16.051539 | orchestrator | 2025-06-02 17:45:16.051543 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-02 17:45:16.051547 | orchestrator | Monday 02 June 2025 17:44:27 +0000 (0:00:02.434) 0:11:00.552 *********** 2025-06-02 17:45:16.051551 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 17:45:16.051555 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.051559 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 17:45:16.051563 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.051568 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 17:45:16.051572 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.051576 | orchestrator | 2025-06-02 17:45:16.051580 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-02 17:45:16.051584 | orchestrator | Monday 02 June 2025 17:44:28 +0000 (0:00:01.268) 0:11:01.820 *********** 2025-06-02 17:45:16.051592 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-02 17:45:16.051596 | orchestrator | 2025-06-02 17:45:16.051600 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-02 17:45:16.051607 | orchestrator | Monday 02 June 2025 17:44:29 +0000 (0:00:00.220) 0:11:02.040 *********** 2025-06-02 17:45:16.051612 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:45:16.051616 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:45:16.051620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:45:16.051624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:45:16.051628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:45:16.051632 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.051636 | orchestrator | 2025-06-02 17:45:16.051640 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-02 17:45:16.051644 | orchestrator | Monday 02 June 2025 17:44:30 +0000 (0:00:01.192) 0:11:03.233 *********** 2025-06-02 17:45:16.051648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:45:16.051652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:45:16.051657 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:45:16.051661 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:45:16.051665 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-02 17:45:16.051669 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.051673 | orchestrator | 2025-06-02 17:45:16.051677 | 
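For orientation, the pool specs logged above (`pg_num: 8`, `size: 3`, `type: replicated`) map fairly directly onto plain `ceph osd pool create` / `ceph osd pool set` calls. A minimal sketch of that mapping, assuming the standard Ceph CLI syntax (the exact invocation ceph-ansible uses internally may differ):

```python
# Sketch: turn the rgw pool spec seen in the play output into Ceph CLI calls.
# The spec format mirrors the items logged above; the command layout is an
# assumption based on the standard `ceph osd pool create` syntax.
RGW_POOLS = {
    "default.rgw.buckets.data": {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.buckets.index": {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.control": {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.log": {"pg_num": 8, "size": 3, "type": "replicated"},
    "default.rgw.meta": {"pg_num": 8, "size": 3, "type": "replicated"},
}

def pool_create_commands(pools):
    """Build one `ceph osd pool create` plus one replica-size set per pool."""
    cmds = []
    for name, spec in sorted(pools.items()):
        cmds.append(f"ceph osd pool create {name} {spec['pg_num']} {spec['type']}")
        cmds.append(f"ceph osd pool set {name} size {spec['size']}")
    return cmds

for cmd in pool_create_commands(RGW_POOLS):
    print(cmd)
```

The 31-second duration of the "Create rgw pools" task in the recap below is consistent with these being real cluster operations (PG creation and peering), not just idempotent checks.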
orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-02 17:45:16.051681 | orchestrator | Monday 02 June 2025 17:44:30 +0000 (0:00:00.586) 0:11:03.819 *********** 2025-06-02 17:45:16.051685 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 17:45:16.051689 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 17:45:16.051693 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 17:45:16.051697 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 17:45:16.051701 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-02 17:45:16.051705 | orchestrator | 2025-06-02 17:45:16.051709 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-02 17:45:16.051713 | orchestrator | Monday 02 June 2025 17:45:01 +0000 (0:00:31.145) 0:11:34.965 *********** 2025-06-02 17:45:16.051717 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.051721 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.051725 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.051733 | orchestrator | 2025-06-02 17:45:16.051740 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-02 17:45:16.051744 | orchestrator | Monday 02 June 2025 17:45:02 +0000 (0:00:00.334) 0:11:35.300 
*********** 2025-06-02 17:45:16.051748 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.051752 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.051756 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.051760 | orchestrator | 2025-06-02 17:45:16.051764 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-02 17:45:16.051768 | orchestrator | Monday 02 June 2025 17:45:02 +0000 (0:00:00.308) 0:11:35.608 *********** 2025-06-02 17:45:16.051772 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.051776 | orchestrator | 2025-06-02 17:45:16.051780 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-02 17:45:16.051784 | orchestrator | Monday 02 June 2025 17:45:03 +0000 (0:00:00.821) 0:11:36.430 *********** 2025-06-02 17:45:16.051788 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.051792 | orchestrator | 2025-06-02 17:45:16.051796 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-02 17:45:16.051800 | orchestrator | Monday 02 June 2025 17:45:03 +0000 (0:00:00.548) 0:11:36.978 *********** 2025-06-02 17:45:16.051804 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.051808 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.051812 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.051816 | orchestrator | 2025-06-02 17:45:16.051823 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-02 17:45:16.051827 | orchestrator | Monday 02 June 2025 17:45:05 +0000 (0:00:01.301) 0:11:38.280 *********** 2025-06-02 17:45:16.051831 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.051835 | orchestrator | 
changed: [testbed-node-4] 2025-06-02 17:45:16.051839 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.051843 | orchestrator | 2025-06-02 17:45:16.051847 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-02 17:45:16.051851 | orchestrator | Monday 02 June 2025 17:45:06 +0000 (0:00:01.485) 0:11:39.766 *********** 2025-06-02 17:45:16.051856 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:45:16.051860 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:45:16.051864 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:45:16.051868 | orchestrator | 2025-06-02 17:45:16.051872 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-02 17:45:16.051876 | orchestrator | Monday 02 June 2025 17:45:08 +0000 (0:00:01.834) 0:11:41.601 *********** 2025-06-02 17:45:16.051880 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-02 17:45:16.051884 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-02 17:45:16.051888 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-02 17:45:16.052025 | orchestrator | 2025-06-02 17:45:16.052049 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-02 17:45:16.052053 | orchestrator | Monday 02 June 2025 17:45:11 +0000 (0:00:02.787) 0:11:44.388 *********** 2025-06-02 17:45:16.052057 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.052062 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.052066 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.052070 | orchestrator | 2025-06-02 17:45:16.052074 | orchestrator | RUNNING HANDLER 
[ceph-handler : Rgws handler] ********************************** 2025-06-02 17:45:16.052078 | orchestrator | Monday 02 June 2025 17:45:11 +0000 (0:00:00.372) 0:11:44.761 *********** 2025-06-02 17:45:16.052082 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:45:16.052092 | orchestrator | 2025-06-02 17:45:16.052096 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-02 17:45:16.052100 | orchestrator | Monday 02 June 2025 17:45:12 +0000 (0:00:00.527) 0:11:45.288 *********** 2025-06-02 17:45:16.052104 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.052108 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.052112 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.052116 | orchestrator | 2025-06-02 17:45:16.052120 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-02 17:45:16.052124 | orchestrator | Monday 02 June 2025 17:45:12 +0000 (0:00:00.623) 0:11:45.912 *********** 2025-06-02 17:45:16.052128 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.052132 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:45:16.052136 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:45:16.052140 | orchestrator | 2025-06-02 17:45:16.052145 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-02 17:45:16.052149 | orchestrator | Monday 02 June 2025 17:45:13 +0000 (0:00:00.379) 0:11:46.291 *********** 2025-06-02 17:45:16.052153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:45:16.052157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:45:16.052161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:45:16.052165 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:45:16.052169 | 
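The skipped "Restart ceph rgw daemon(s)" handler above would, on a config change, restart the rgw containers one node at a time rather than all at once, so the gateway stays reachable throughout. A rough sketch of that serial-restart pattern (the `restart` and `is_healthy` callables are illustrative stand-ins, not ceph-ansible's actual restart script):

```python
# Sketch of the serial restart pattern a rolling rgw handler implements:
# restart one node's daemon, wait until it reports healthy, then move on.
# `restart` and `is_healthy` stand in for the real systemd and radosgw
# health checks; this is an assumption about the pattern, not the role's code.
def rolling_restart(nodes, restart, is_healthy, max_checks=3):
    restarted = []
    for node in nodes:
        restart(node)
        for _ in range(max_checks):
            if is_healthy(node):
                break
        else:
            raise RuntimeError(f"{node} did not come back up")
        restarted.append(node)
    return restarted
```

Here the handler is skipped because the containers were just started for the first time; there is nothing to restart.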
orchestrator | 2025-06-02 17:45:16.052173 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-02 17:45:16.052177 | orchestrator | Monday 02 June 2025 17:45:13 +0000 (0:00:00.656) 0:11:46.948 *********** 2025-06-02 17:45:16.052181 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:45:16.052185 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:45:16.052189 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:45:16.052193 | orchestrator | 2025-06-02 17:45:16.052197 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:45:16.052205 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-06-02 17:45:16.052210 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-02 17:45:16.052215 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-02 17:45:16.052219 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-06-02 17:45:16.052223 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-02 17:45:16.052227 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-02 17:45:16.052231 | orchestrator | 2025-06-02 17:45:16.052235 | orchestrator | 2025-06-02 17:45:16.052239 | orchestrator | 2025-06-02 17:45:16.052243 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:45:16.052247 | orchestrator | Monday 02 June 2025 17:45:14 +0000 (0:00:00.272) 0:11:47.221 *********** 2025-06-02 17:45:16.052258 | orchestrator | =============================================================================== 2025-06-02 17:45:16.052263 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 57.17s 2025-06-02 17:45:16.052267 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 41.01s 2025-06-02 17:45:16.052271 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.15s 2025-06-02 17:45:16.052275 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.60s 2025-06-02 17:45:16.052282 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.96s 2025-06-02 17:45:16.052286 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 13.87s 2025-06-02 17:45:16.052290 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.97s 2025-06-02 17:45:16.052294 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.53s 2025-06-02 17:45:16.052298 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.39s 2025-06-02 17:45:16.052302 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.88s 2025-06-02 17:45:16.052306 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.93s 2025-06-02 17:45:16.052310 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.71s 2025-06-02 17:45:16.052314 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.84s 2025-06-02 17:45:16.052318 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.66s 2025-06-02 17:45:16.052322 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.47s 2025-06-02 17:45:16.052326 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.08s 2025-06-02 17:45:16.052330 | orchestrator | ceph-config : 
Run 'ceph-volume lvm list' to see how many osds have already been created --- 4.01s 2025-06-02 17:45:16.052334 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.84s 2025-06-02 17:45:16.052338 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.69s 2025-06-02 17:45:16.052342 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.66s 2025-06-02 17:45:16.052346 | orchestrator | 2025-06-02 17:45:16 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:16.052351 | orchestrator | 2025-06-02 17:45:16 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:16.052355 | orchestrator | 2025-06-02 17:45:16 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:16.052359 | orchestrator | 2025-06-02 17:45:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:19.081814 | orchestrator | 2025-06-02 17:45:19 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:19.082285 | orchestrator | 2025-06-02 17:45:19 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:19.084825 | orchestrator | 2025-06-02 17:45:19 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:19.084923 | orchestrator | 2025-06-02 17:45:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:22.137990 | orchestrator | 2025-06-02 17:45:22 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:22.140798 | orchestrator | 2025-06-02 17:45:22 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:22.144107 | orchestrator | 2025-06-02 17:45:22 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:22.144481 | orchestrator | 2025-06-02 17:45:22 | INFO  | Wait 1 second(s) 
until the next check 2025-06-02 17:45:25.195971 | orchestrator | 2025-06-02 17:45:25 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:25.198465 | orchestrator | 2025-06-02 17:45:25 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:25.201478 | orchestrator | 2025-06-02 17:45:25 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:25.202219 | orchestrator | 2025-06-02 17:45:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:28.248956 | orchestrator | 2025-06-02 17:45:28 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:28.250270 | orchestrator | 2025-06-02 17:45:28 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:28.255229 | orchestrator | 2025-06-02 17:45:28 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:28.255281 | orchestrator | 2025-06-02 17:45:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:31.307347 | orchestrator | 2025-06-02 17:45:31 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:31.309309 | orchestrator | 2025-06-02 17:45:31 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:31.313634 | orchestrator | 2025-06-02 17:45:31 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:31.313942 | orchestrator | 2025-06-02 17:45:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:34.353906 | orchestrator | 2025-06-02 17:45:34 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:34.356909 | orchestrator | 2025-06-02 17:45:34 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:34.360371 | orchestrator | 2025-06-02 17:45:34 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 
17:45:34.360432 | orchestrator | 2025-06-02 17:45:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:37.402174 | orchestrator | 2025-06-02 17:45:37 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:37.403279 | orchestrator | 2025-06-02 17:45:37 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:37.404931 | orchestrator | 2025-06-02 17:45:37 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:37.404978 | orchestrator | 2025-06-02 17:45:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:40.456588 | orchestrator | 2025-06-02 17:45:40 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:40.456681 | orchestrator | 2025-06-02 17:45:40 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:40.460181 | orchestrator | 2025-06-02 17:45:40 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:40.460258 | orchestrator | 2025-06-02 17:45:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:43.509460 | orchestrator | 2025-06-02 17:45:43 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:43.511490 | orchestrator | 2025-06-02 17:45:43 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:43.514557 | orchestrator | 2025-06-02 17:45:43 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:43.514626 | orchestrator | 2025-06-02 17:45:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:46.558306 | orchestrator | 2025-06-02 17:45:46 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:46.559626 | orchestrator | 2025-06-02 17:45:46 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:46.562233 | orchestrator | 2025-06-02 17:45:46 | 
INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:46.562307 | orchestrator | 2025-06-02 17:45:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:49.614505 | orchestrator | 2025-06-02 17:45:49 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:49.616124 | orchestrator | 2025-06-02 17:45:49 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:49.617422 | orchestrator | 2025-06-02 17:45:49 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:49.617477 | orchestrator | 2025-06-02 17:45:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:52.666320 | orchestrator | 2025-06-02 17:45:52 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:52.668567 | orchestrator | 2025-06-02 17:45:52 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:52.671027 | orchestrator | 2025-06-02 17:45:52 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:52.671077 | orchestrator | 2025-06-02 17:45:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:55.721085 | orchestrator | 2025-06-02 17:45:55 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:55.723226 | orchestrator | 2025-06-02 17:45:55 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:45:55.725702 | orchestrator | 2025-06-02 17:45:55 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:55.725762 | orchestrator | 2025-06-02 17:45:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:45:58.775477 | orchestrator | 2025-06-02 17:45:58 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:45:58.776663 | orchestrator | 2025-06-02 17:45:58 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in 
state STARTED 2025-06-02 17:45:58.779181 | orchestrator | 2025-06-02 17:45:58 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:45:58.779260 | orchestrator | 2025-06-02 17:45:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:01.833744 | orchestrator | 2025-06-02 17:46:01 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:46:01.836460 | orchestrator | 2025-06-02 17:46:01 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:46:01.839017 | orchestrator | 2025-06-02 17:46:01 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:01.839062 | orchestrator | 2025-06-02 17:46:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:04.887803 | orchestrator | 2025-06-02 17:46:04 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:46:04.889947 | orchestrator | 2025-06-02 17:46:04 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state STARTED 2025-06-02 17:46:04.892299 | orchestrator | 2025-06-02 17:46:04 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:04.892354 | orchestrator | 2025-06-02 17:46:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:07.941581 | orchestrator | 2025-06-02 17:46:07 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:46:07.943917 | orchestrator | 2025-06-02 17:46:07 | INFO  | Task 4fa99543-1511-41ee-8c59-79a3d0676435 is in state SUCCESS 2025-06-02 17:46:07.945653 | orchestrator | 2025-06-02 17:46:07.945701 | orchestrator | 2025-06-02 17:46:07.945709 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:46:07.945717 | orchestrator | 2025-06-02 17:46:07.945723 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:46:07.945731 | orchestrator | 
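The repeating "Task … is in state STARTED / Wait 1 second(s) until the next check" messages above come from a poll-until-done loop over the OSISM task IDs, which exits once a task reports a terminal state such as SUCCESS. A stripped-down sketch of that pattern (`get_state` is a hypothetical stand-in for the real task-state lookup; the actual loop also sleeps between rounds and logs each check, as seen in the output):

```python
import itertools

# Sketch: poll a set of task IDs until every one has left the STARTED state.
# `get_state` stands in for whatever API call reports task status; the real
# loop additionally sleeps between rounds and logs every check, as above.
def wait_for_tasks(task_ids, get_state, max_rounds=100):
    for round_no in itertools.count(1):
        if round_no > max_rounds:
            raise TimeoutError("tasks still running after max_rounds checks")
        states = {tid: get_state(tid) for tid in task_ids}
        if all(s != "STARTED" for s in states.values()):
            return states
```

In the log, task 4fa99543 transitions to SUCCESS at 17:46:07, at which point its buffered play output (the OpenSearch deployment below) is flushed to the console.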
Monday 02 June 2025 17:43:15 +0000 (0:00:00.280) 0:00:00.280 *********** 2025-06-02 17:46:07.945758 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:46:07.945766 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:46:07.945772 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:46:07.945779 | orchestrator | 2025-06-02 17:46:07.945785 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:46:07.945894 | orchestrator | Monday 02 June 2025 17:43:15 +0000 (0:00:00.306) 0:00:00.587 *********** 2025-06-02 17:46:07.945903 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-06-02 17:46:07.945908 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-06-02 17:46:07.945911 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2025-06-02 17:46:07.945915 | orchestrator | 2025-06-02 17:46:07.945919 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-06-02 17:46:07.945923 | orchestrator | 2025-06-02 17:46:07.945926 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 17:46:07.945930 | orchestrator | Monday 02 June 2025 17:43:16 +0000 (0:00:00.453) 0:00:01.040 *********** 2025-06-02 17:46:07.945934 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:46:07.945938 | orchestrator | 2025-06-02 17:46:07.945942 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-06-02 17:46:07.945945 | orchestrator | Monday 02 June 2025 17:43:16 +0000 (0:00:00.444) 0:00:01.485 *********** 2025-06-02 17:46:07.945949 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 17:46:07.945953 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 
17:46:07.945957 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-02 17:46:07.945961 | orchestrator | 2025-06-02 17:46:07.945976 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-02 17:46:07.945980 | orchestrator | Monday 02 June 2025 17:43:17 +0000 (0:00:00.676) 0:00:02.162 *********** 2025-06-02 17:46:07.945986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:46:07.945992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:46:07.946007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:46:07.946071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:46:07.946084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:46:07.946088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:46:07.946093 | orchestrator | 2025-06-02 17:46:07.946097 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 17:46:07.946100 | orchestrator | Monday 02 June 2025 17:43:18 +0000 (0:00:01.454) 0:00:03.617 *********** 2025-06-02 17:46:07.946108 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:46:07.946112 | orchestrator | 2025-06-02 17:46:07.946116 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-02 17:46:07.946120 | orchestrator | Monday 02 June 2025 17:43:19 +0000 (0:00:00.531) 0:00:04.149 *********** 2025-06-02 17:46:07.946131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:46:07.946136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:46:07.946143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:46:07.946147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:46:07.946156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:46:07.946166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:46:07.946173 | orchestrator | 2025-06-02 17:46:07.946179 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-02 17:46:07.946184 | orchestrator | Monday 02 June 2025 17:43:22 +0000 (0:00:03.258) 0:00:07.407 *********** 2025-06-02 17:46:07.946193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:46:07.946200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:46:07.946210 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:46:07.946217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:46:07.946228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:46:07.946237 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:46:07.946247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': 
{'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:46:07.946251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:46:07.946259 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:46:07.946263 | orchestrator | 2025-06-02 17:46:07.946267 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-02 17:46:07.946271 | orchestrator | Monday 02 June 2025 17:43:24 +0000 (0:00:01.390) 0:00:08.797 *********** 2025-06-02 17:46:07.946358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:46:07.946374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:46:07.946381 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:46:07.946391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:46:07.946395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:46:07.946404 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:46:07.946408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-02 17:46:07.946418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-02 17:46:07.946422 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:46:07.946425 | orchestrator | 2025-06-02 17:46:07.946429 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-02 17:46:07.946433 | orchestrator | Monday 02 June 2025 17:43:24 +0000 (0:00:00.919) 
0:00:09.717 *********** 2025-06-02 17:46:07.946439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:46:07.946444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:46:07.946459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:46:07.946468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:46:07.946474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 
'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:46:07.946486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:46:07.946500 | orchestrator | 2025-06-02 17:46:07.946506 | orchestrator | TASK 
[opensearch : Copying over opensearch service config file] ****************
2025-06-02 17:46:07.946512 | orchestrator | Monday 02 June 2025 17:43:27 +0000 (0:00:02.459) 0:00:12.176 ***********
2025-06-02 17:46:07.946518 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:07.946523 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:46:07.946528 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:46:07.946535 | orchestrator |
2025-06-02 17:46:07.946541 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2025-06-02 17:46:07.946547 | orchestrator | Monday 02 June 2025 17:43:30 +0000 (0:00:03.181) 0:00:15.357 ***********
2025-06-02 17:46:07.946552 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:46:07.946558 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:07.946564 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:46:07.946570 | orchestrator |
2025-06-02 17:46:07.946575 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2025-06-02 17:46:07.946581 | orchestrator | Monday 02 June 2025 17:43:32 +0000 (0:00:01.535) 0:00:16.893 ***********
2025-06-02 17:46:07.946587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200',
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:46:07.946599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:46:07.946609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-02 17:46:07.946615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:46:07.946630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:46:07.946641 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-02 17:46:07.946648 | orchestrator | 2025-06-02 17:46:07.946654 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-02 17:46:07.946660 | orchestrator | Monday 02 June 2025 17:43:34 +0000 (0:00:02.107) 0:00:19.000 *********** 2025-06-02 17:46:07.946666 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:46:07.946672 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:46:07.946679 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:46:07.946685 | orchestrator | 2025-06-02 17:46:07.946691 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-02 17:46:07.946698 | orchestrator | Monday 02 June 2025 17:43:34 +0000 (0:00:00.336) 0:00:19.337 *********** 2025-06-02 17:46:07.946704 | orchestrator | 2025-06-02 17:46:07.946711 | orchestrator | TASK [opensearch : Flush handlers] 
*********************************************
2025-06-02 17:46:07.946716 | orchestrator | Monday 02 June 2025 17:43:34 +0000 (0:00:00.063) 0:00:19.401 ***********
2025-06-02 17:46:07.946724 | orchestrator |
2025-06-02 17:46:07.946728 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-06-02 17:46:07.946732 | orchestrator | Monday 02 June 2025 17:43:34 +0000 (0:00:00.063) 0:00:19.464 ***********
2025-06-02 17:46:07.946736 | orchestrator |
2025-06-02 17:46:07.946739 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-06-02 17:46:07.946746 | orchestrator | Monday 02 June 2025 17:43:35 +0000 (0:00:00.279) 0:00:19.743 ***********
2025-06-02 17:46:07.946752 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:46:07.946758 | orchestrator |
2025-06-02 17:46:07.946764 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-06-02 17:46:07.946771 | orchestrator | Monday 02 June 2025 17:43:35 +0000 (0:00:00.221) 0:00:19.965 ***********
2025-06-02 17:46:07.946777 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:46:07.946783 | orchestrator |
2025-06-02 17:46:07.946789 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-06-02 17:46:07.946794 | orchestrator | Monday 02 June 2025 17:43:35 +0000 (0:00:00.223) 0:00:20.188 ***********
2025-06-02 17:46:07.946801 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:07.946842 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:46:07.946849 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:46:07.946856 | orchestrator |
2025-06-02 17:46:07.946861 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-06-02 17:46:07.946867 | orchestrator | Monday 02 June 2025 17:44:34 +0000 (0:00:58.863) 0:01:19.052 ***********
2025-06-02 17:46:07.946873 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:07.946881 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:46:07.946885 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:46:07.946890 | orchestrator |
2025-06-02 17:46:07.946896 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-06-02 17:46:07.946903 | orchestrator | Monday 02 June 2025 17:45:54 +0000 (0:01:19.852) 0:02:38.905 ***********
2025-06-02 17:46:07.946909 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:46:07.946916 | orchestrator |
2025-06-02 17:46:07.946922 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-06-02 17:46:07.946928 | orchestrator | Monday 02 June 2025 17:45:54 +0000 (0:00:00.754) 0:02:39.659 ***********
2025-06-02 17:46:07.946934 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:46:07.946941 | orchestrator |
2025-06-02 17:46:07.946947 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-06-02 17:46:07.946954 | orchestrator | Monday 02 June 2025 17:45:57 +0000 (0:00:02.328) 0:02:41.988 ***********
2025-06-02 17:46:07.946960 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:46:07.946965 | orchestrator |
2025-06-02 17:46:07.946972 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-06-02 17:46:07.946978 | orchestrator | Monday 02 June 2025 17:45:59 +0000 (0:00:02.375) 0:02:44.363 ***********
2025-06-02 17:46:07.946983 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:07.946990 | orchestrator |
2025-06-02 17:46:07.946996 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-06-02 17:46:07.947002 | orchestrator | Monday 02 June 2025 17:46:02 +0000 (0:00:02.699) 0:02:47.063 ***********
2025-06-02 17:46:07.947010 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:07.947014 | orchestrator |
2025-06-02 17:46:07.947019 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:46:07.947024 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-06-02 17:46:07.947031 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-02 17:46:07.947035 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-06-02 17:46:07.947045 | orchestrator |
2025-06-02 17:46:07.947049 | orchestrator |
2025-06-02 17:46:07.947054 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:46:07.947062 | orchestrator | Monday 02 June 2025 17:46:04 +0000 (0:00:02.380) 0:02:49.443 ***********
2025-06-02 17:46:07.947067 | orchestrator | ===============================================================================
2025-06-02 17:46:07.947071 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 79.85s
2025-06-02 17:46:07.947075 | orchestrator | opensearch : Restart opensearch container ------------------------------ 58.86s
2025-06-02 17:46:07.947081 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.26s
2025-06-02 17:46:07.947087 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.18s
2025-06-02 17:46:07.947094 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.70s
2025-06-02 17:46:07.947100 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.46s
2025-06-02 17:46:07.947107 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.38s
2025-06-02 17:46:07.947114 | orchestrator | opensearch : Check if a log retention policy
exists --------------------- 2.38s 2025-06-02 17:46:07.947120 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.33s 2025-06-02 17:46:07.947126 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.11s 2025-06-02 17:46:07.947132 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.54s 2025-06-02 17:46:07.947137 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.45s 2025-06-02 17:46:07.947141 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.39s 2025-06-02 17:46:07.947145 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.92s 2025-06-02 17:46:07.947149 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.75s 2025-06-02 17:46:07.947154 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.68s 2025-06-02 17:46:07.947164 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s 2025-06-02 17:46:07.947171 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2025-06-02 17:46:07.947177 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.44s 2025-06-02 17:46:07.947183 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.41s 2025-06-02 17:46:07.947189 | orchestrator | 2025-06-02 17:46:07 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:07.947195 | orchestrator | 2025-06-02 17:46:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:10.990137 | orchestrator | 2025-06-02 17:46:10 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:46:10.991246 | orchestrator | 2025-06-02 17:46:10 | INFO  | Task 
0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:10.991307 | orchestrator | 2025-06-02 17:46:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:14.037241 | orchestrator | 2025-06-02 17:46:14 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:46:14.038569 | orchestrator | 2025-06-02 17:46:14 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:14.038619 | orchestrator | 2025-06-02 17:46:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:17.081532 | orchestrator | 2025-06-02 17:46:17 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:46:17.083993 | orchestrator | 2025-06-02 17:46:17 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:17.084072 | orchestrator | 2025-06-02 17:46:17 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:20.128248 | orchestrator | 2025-06-02 17:46:20 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:46:20.129880 | orchestrator | 2025-06-02 17:46:20 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:20.129912 | orchestrator | 2025-06-02 17:46:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:23.175905 | orchestrator | 2025-06-02 17:46:23 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:46:23.178173 | orchestrator | 2025-06-02 17:46:23 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:23.178222 | orchestrator | 2025-06-02 17:46:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:26.216872 | orchestrator | 2025-06-02 17:46:26 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:46:26.218307 | orchestrator | 2025-06-02 17:46:26 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 
17:46:26.218367 | orchestrator | 2025-06-02 17:46:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:29.260232 | orchestrator | 2025-06-02 17:46:29 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state STARTED 2025-06-02 17:46:29.261738 | orchestrator | 2025-06-02 17:46:29 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:29.261947 | orchestrator | 2025-06-02 17:46:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:32.320109 | orchestrator | 2025-06-02 17:46:32 | INFO  | Task b79705b3-f6d8-4308-8faf-077d74224167 is in state SUCCESS 2025-06-02 17:46:32.322069 | orchestrator | 2025-06-02 17:46:32.322154 | orchestrator | 2025-06-02 17:46:32.322169 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-06-02 17:46:32.322181 | orchestrator | 2025-06-02 17:46:32.322192 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-02 17:46:32.322203 | orchestrator | Monday 02 June 2025 17:43:15 +0000 (0:00:00.165) 0:00:00.165 *********** 2025-06-02 17:46:32.322213 | orchestrator | ok: [localhost] => { 2025-06-02 17:46:32.322486 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-06-02 17:46:32.322506 | orchestrator | } 2025-06-02 17:46:32.322516 | orchestrator | 2025-06-02 17:46:32.322526 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-02 17:46:32.322536 | orchestrator | Monday 02 June 2025 17:43:15 +0000 (0:00:00.054) 0:00:00.220 *********** 2025-06-02 17:46:32.322547 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-02 17:46:32.322559 | orchestrator | ...ignoring 2025-06-02 17:46:32.322569 | orchestrator | 2025-06-02 17:46:32.322579 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-02 17:46:32.322589 | orchestrator | Monday 02 June 2025 17:43:18 +0000 (0:00:02.852) 0:00:03.073 *********** 2025-06-02 17:46:32.322599 | orchestrator | skipping: [localhost] 2025-06-02 17:46:32.322608 | orchestrator | 2025-06-02 17:46:32.322618 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-02 17:46:32.322628 | orchestrator | Monday 02 June 2025 17:43:18 +0000 (0:00:00.047) 0:00:03.120 *********** 2025-06-02 17:46:32.322637 | orchestrator | ok: [localhost] 2025-06-02 17:46:32.322647 | orchestrator | 2025-06-02 17:46:32.322677 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:46:32.322687 | orchestrator | 2025-06-02 17:46:32.322697 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:46:32.322706 | orchestrator | Monday 02 June 2025 17:43:18 +0000 (0:00:00.148) 0:00:03.268 *********** 2025-06-02 17:46:32.322745 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:46:32.322755 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:46:32.322764 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:46:32.322802 | orchestrator | 2025-06-02 17:46:32.322812 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:46:32.322822 | orchestrator | Monday 02 June 2025 17:43:18 +0000 (0:00:00.269) 0:00:03.538 *********** 2025-06-02 17:46:32.322831 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-02 17:46:32.322841 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-06-02 17:46:32.322851 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-06-02 17:46:32.322861 | orchestrator |
2025-06-02 17:46:32.322871 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-06-02 17:46:32.322880 | orchestrator |
2025-06-02 17:46:32.322890 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-06-02 17:46:32.322899 | orchestrator | Monday 02 June 2025 17:43:19 +0000 (0:00:00.847) 0:00:04.385 ***********
2025-06-02 17:46:32.322909 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-06-02 17:46:32.322918 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-06-02 17:46:32.322928 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-06-02 17:46:32.322937 | orchestrator |
2025-06-02 17:46:32.322947 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-02 17:46:32.322956 | orchestrator | Monday 02 June 2025 17:43:20 +0000 (0:00:00.404) 0:00:04.790 ***********
2025-06-02 17:46:32.322966 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:46:32.322977 | orchestrator |
2025-06-02 17:46:32.322987 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2025-06-02 17:46:32.322996 | orchestrator | Monday 02 June 2025 17:43:20 +0000 (0:00:00.635) 0:00:05.426 ***********
2025-06-02 17:46:32.323031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:46:32.323054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:46:32.323075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:46:32.323086 | orchestrator |
2025-06-02 17:46:32.323104 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2025-06-02 17:46:32.323117 | orchestrator | Monday 02 June 2025 17:43:24 +0000 (0:00:03.395) 0:00:08.822 ***********
2025-06-02 17:46:32.323130 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.323142 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:32.323153 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.323164 | orchestrator |
2025-06-02 17:46:32.323176 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2025-06-02 17:46:32.323187 | orchestrator | Monday 02 June 2025 17:43:24 +0000 (0:00:00.710) 0:00:09.532 ***********
2025-06-02 17:46:32.323308 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.323324 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.323335 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:32.323346 | orchestrator |
2025-06-02 17:46:32.323358 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2025-06-02 17:46:32.323370 | orchestrator | Monday 02 June 2025 17:43:26 +0000 (0:00:01.553) 0:00:11.085 ***********
2025-06-02 17:46:32.323389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:46:32.323411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:46:32.323437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:46:32.323451 | orchestrator |
2025-06-02 17:46:32.323462 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-06-02 17:46:32.323474 | orchestrator | Monday 02 June 2025 17:43:30 +0000 (0:00:03.665) 0:00:14.751 ***********
2025-06-02 17:46:32.323486 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.323503 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.323518 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:32.323533 | orchestrator |
2025-06-02 17:46:32.323549 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-06-02 17:46:32.323565 | orchestrator | Monday 02 June 2025 17:43:31 +0000 (0:00:01.110) 0:00:15.861 ***********
2025-06-02 17:46:32.323582 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:32.323598 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:46:32.323614 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:46:32.323631 | orchestrator |
2025-06-02 17:46:32.323648 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-02 17:46:32.323666 | orchestrator | Monday 02 June 2025 17:43:35 +0000 (0:00:04.134) 0:00:19.996 ***********
2025-06-02 17:46:32.323682 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:46:32.323698 | orchestrator |
2025-06-02 17:46:32.323712 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-06-02 17:46:32.323722 | orchestrator | Monday 02 June 2025 17:43:35 +0000 (0:00:00.522) 0:00:20.518 ***********
2025-06-02 17:46:32.323744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:46:32.323787 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:46:32.323805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:46:32.323816 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.323835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:46:32.323854 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.323870 | orchestrator |
2025-06-02 17:46:32.323887 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-06-02 17:46:32.323906 | orchestrator | Monday 02 June 2025 17:43:39 +0000 (0:00:03.434) 0:00:23.952 ***********
2025-06-02 17:46:32.323933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:46:32.323948 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.323966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-06-02 17:46:32.323985 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:46:32.324009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 17:46:32.324020 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:46:32.324030 | orchestrator | 2025-06-02 17:46:32.324039 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-02 17:46:32.324049 | orchestrator | Monday 02 June 2025 17:43:41 +0000 (0:00:02.529) 0:00:26.482 *********** 2025-06-02 17:46:32.324059 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 17:46:32.324087 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:46:32.324124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': 
{'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 17:46:32.324140 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:46:32.324150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-02 17:46:32.324168 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:46:32.324179 | orchestrator | 2025-06-02 17:46:32.324189 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-02 17:46:32.324199 | orchestrator | Monday 02 June 2025 17:43:46 +0000 (0:00:04.164) 
0:00:30.646 *********** 2025-06-02 17:46:32.324224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 17:46:32.324235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 
'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 17:46:32.324261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-02 17:46:32.324273 | orchestrator | 2025-06-02 17:46:32.324283 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-06-02 17:46:32.324297 | orchestrator | Monday 02 June 2025 17:43:50 +0000 (0:00:04.216) 0:00:34.862 *********** 2025-06-02 17:46:32.324307 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:46:32.324317 | orchestrator | 
changed: [testbed-node-1]
2025-06-02 17:46:32.324327 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:46:32.324336 | orchestrator |
2025-06-02 17:46:32.324346 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2025-06-02 17:46:32.324355 | orchestrator | Monday 02 June 2025 17:43:51 +0000 (0:00:01.404) 0:00:36.267 ***********
2025-06-02 17:46:32.324365 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:46:32.324375 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:46:32.324384 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:46:32.324394 | orchestrator |
2025-06-02 17:46:32.324403 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2025-06-02 17:46:32.324413 | orchestrator | Monday 02 June 2025 17:43:51 +0000 (0:00:00.331) 0:00:36.599 ***********
2025-06-02 17:46:32.324423 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:46:32.324433 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:46:32.324443 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:46:32.324453 | orchestrator |
2025-06-02 17:46:32.324462 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2025-06-02 17:46:32.324472 | orchestrator | Monday 02 June 2025 17:43:52 +0000 (0:00:00.304) 0:00:36.903 ***********
2025-06-02 17:46:32.324483 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2025-06-02 17:46:32.324499 | orchestrator | ...ignoring
2025-06-02 17:46:32.324509 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2025-06-02 17:46:32.324519 | orchestrator | ...ignoring
2025-06-02 17:46:32.324530 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2025-06-02 17:46:32.324539 | orchestrator | ...ignoring
2025-06-02 17:46:32.324549 | orchestrator |
2025-06-02 17:46:32.324558 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2025-06-02 17:46:32.324568 | orchestrator | Monday 02 June 2025 17:44:03 +0000 (0:00:10.861) 0:00:47.765 ***********
2025-06-02 17:46:32.324578 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:46:32.324587 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:46:32.324597 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:46:32.324606 | orchestrator |
2025-06-02 17:46:32.324616 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2025-06-02 17:46:32.324626 | orchestrator | Monday 02 June 2025 17:44:03 +0000 (0:00:00.408) 0:00:48.436 ***********
2025-06-02 17:46:32.324635 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:46:32.324645 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.324654 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.324664 | orchestrator |
2025-06-02 17:46:32.324674 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2025-06-02 17:46:32.324683 | orchestrator | Monday 02 June 2025 17:44:04 +0000 (0:00:00.429) 0:00:48.845 ***********
2025-06-02 17:46:32.324693 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:46:32.324703 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.324712 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.324722 | orchestrator |
2025-06-02 17:46:32.324731 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2025-06-02 17:46:32.324741 | orchestrator | Monday 02 June 2025 17:44:04 +0000 (0:00:00.447) 0:00:49.274 ***********
2025-06-02 17:46:32.324751 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:46:32.324760 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.324794 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.324813 | orchestrator |
2025-06-02 17:46:32.324823 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2025-06-02 17:46:32.324832 | orchestrator | Monday 02 June 2025 17:44:05 +0000 (0:00:00.701) 0:00:49.722 ***********
2025-06-02 17:46:32.324842 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:46:32.324859 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:46:32.324875 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:46:32.324892 | orchestrator |
2025-06-02 17:46:32.324908 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2025-06-02 17:46:32.324923 | orchestrator | Monday 02 June 2025 17:44:05 +0000 (0:00:00.453) 0:00:50.423 ***********
2025-06-02 17:46:32.324941 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:46:32.324951 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.324961 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.324976 | orchestrator |
2025-06-02 17:46:32.324992 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-02 17:46:32.325009 | orchestrator | Monday 02 June 2025 17:44:06 +0000 (0:00:00.382) 0:00:50.877 ***********
2025-06-02 17:46:32.325026 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.325043 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.325060 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2025-06-02 17:46:32.325077 | orchestrator |
2025-06-02 17:46:32.325093 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2025-06-02 17:46:32.325111 | orchestrator | Monday 02 June 2025 17:44:06 +0000 (0:00:00.382) 0:00:51.259 ***********
2025-06-02 17:46:32.325127 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:32.325147 | orchestrator |
2025-06-02 17:46:32.325157 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2025-06-02 17:46:32.325166 | orchestrator | Monday 02 June 2025 17:44:16 +0000 (0:00:10.102) 0:01:01.361 ***********
2025-06-02 17:46:32.325176 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:46:32.325185 | orchestrator |
2025-06-02 17:46:32.325195 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-06-02 17:46:32.325204 | orchestrator | Monday 02 June 2025 17:44:16 +0000 (0:00:00.128) 0:01:01.489 ***********
2025-06-02 17:46:32.325214 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:46:32.325224 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.325234 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.325243 | orchestrator |
2025-06-02 17:46:32.325258 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2025-06-02 17:46:32.325269 | orchestrator | Monday 02 June 2025 17:44:18 +0000 (0:00:01.171) 0:01:02.661 ***********
2025-06-02 17:46:32.325279 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:32.325288 | orchestrator |
2025-06-02 17:46:32.325298 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2025-06-02 17:46:32.325308 | orchestrator | Monday 02 June 2025 17:44:26 +0000 (0:00:08.062) 0:01:10.724 ***********
2025-06-02 17:46:32.325317 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:46:32.325327 | orchestrator |
2025-06-02 17:46:32.325337 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2025-06-02 17:46:32.325346 | orchestrator | Monday 02 June 2025 17:44:27 +0000 (0:00:01.604) 0:01:12.328 ***********
2025-06-02 17:46:32.325356 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:46:32.325366 | orchestrator |
2025-06-02 17:46:32.325375 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2025-06-02 17:46:32.325385 | orchestrator | Monday 02 June 2025 17:44:30 +0000 (0:00:02.684) 0:01:15.013 ***********
2025-06-02 17:46:32.325394 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:32.325404 | orchestrator |
2025-06-02 17:46:32.325414 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2025-06-02 17:46:32.325424 | orchestrator | Monday 02 June 2025 17:44:30 +0000 (0:00:00.140) 0:01:15.153 ***********
2025-06-02 17:46:32.325434 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:46:32.325444 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.325453 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.325463 | orchestrator |
2025-06-02 17:46:32.325473 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2025-06-02 17:46:32.325482 | orchestrator | Monday 02 June 2025 17:44:31 +0000 (0:00:00.519) 0:01:15.672 ***********
2025-06-02 17:46:32.325492 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:46:32.325502 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-06-02 17:46:32.325511 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:46:32.325521 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:46:32.325530 | orchestrator |
2025-06-02 17:46:32.325540 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-06-02 17:46:32.325550 | orchestrator | skipping: no hosts matched
2025-06-02 17:46:32.325560 | orchestrator |
2025-06-02 17:46:32.325569 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-06-02 17:46:32.325579 | orchestrator |
2025-06-02 17:46:32.325590 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-06-02 17:46:32.325600 | orchestrator | Monday 02 June 2025 17:44:31 +0000 (0:00:00.346) 0:01:16.019 ***********
2025-06-02 17:46:32.325610 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:46:32.325619 | orchestrator |
2025-06-02 17:46:32.325630 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-06-02 17:46:32.325640 | orchestrator | Monday 02 June 2025 17:44:50 +0000 (0:00:19.268) 0:01:35.288 ***********
2025-06-02 17:46:32.325650 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:46:32.325660 | orchestrator |
2025-06-02 17:46:32.325675 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-06-02 17:46:32.325685 | orchestrator | Monday 02 June 2025 17:45:11 +0000 (0:00:20.644) 0:01:55.932 ***********
2025-06-02 17:46:32.325695 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:46:32.325704 | orchestrator |
2025-06-02 17:46:32.325714 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-06-02 17:46:32.325724 | orchestrator |
2025-06-02 17:46:32.325734 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-06-02 17:46:32.325743 | orchestrator | Monday 02 June 2025 17:45:13 +0000 (0:00:02.535) 0:01:58.468 ***********
2025-06-02 17:46:32.325753 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:46:32.325762 | orchestrator |
2025-06-02 17:46:32.325809 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-06-02 17:46:32.325820 | orchestrator | Monday 02 June 2025 17:45:34 +0000 (0:00:20.690) 0:02:19.159 ***********
2025-06-02 17:46:32.325830 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:46:32.325840 | orchestrator |
2025-06-02 17:46:32.325850 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-06-02 17:46:32.325860 | orchestrator | Monday 02 June 2025 17:45:55 +0000 (0:00:20.635) 0:02:39.794 ***********
2025-06-02 17:46:32.325870 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:46:32.325880 | orchestrator |
2025-06-02 17:46:32.325890 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-06-02 17:46:32.325900 | orchestrator |
2025-06-02 17:46:32.325917 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2025-06-02 17:46:32.325927 | orchestrator | Monday 02 June 2025 17:45:58 +0000 (0:00:02.915) 0:02:42.710 ***********
2025-06-02 17:46:32.325937 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:32.325947 | orchestrator |
2025-06-02 17:46:32.325957 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2025-06-02 17:46:32.325968 | orchestrator | Monday 02 June 2025 17:46:10 +0000 (0:00:12.877) 0:02:55.588 ***********
2025-06-02 17:46:32.325985 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:46:32.326003 | orchestrator |
2025-06-02 17:46:32.326069 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2025-06-02 17:46:32.326088 | orchestrator | Monday 02 June 2025 17:46:16 +0000 (0:00:05.592) 0:03:01.180 ***********
2025-06-02 17:46:32.326104 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:46:32.326120 | orchestrator |
2025-06-02 17:46:32.326136 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-06-02 17:46:32.326152 | orchestrator |
2025-06-02 17:46:32.326167 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-06-02 17:46:32.326183 | orchestrator | Monday 02 June 2025 17:46:19 +0000 (0:00:02.454) 0:03:03.635 ***********
2025-06-02 17:46:32.326198 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:46:32.326213 | orchestrator |
2025-06-02 17:46:32.326229 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2025-06-02 17:46:32.326245 | orchestrator | Monday 02 June 2025 17:46:19 +0000 (0:00:00.544) 0:03:04.179 ***********
2025-06-02 17:46:32.326262 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.326279 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.326305 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:32.326322 | orchestrator |
2025-06-02 17:46:32.326339 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2025-06-02 17:46:32.326355 | orchestrator | Monday 02 June 2025 17:46:21 +0000 (0:00:02.417) 0:03:06.596 ***********
2025-06-02 17:46:32.326372 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.326389 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.326405 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:32.326421 | orchestrator |
2025-06-02 17:46:32.326436 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2025-06-02 17:46:32.326446 | orchestrator | Monday 02 June 2025 17:46:24 +0000 (0:00:02.049) 0:03:08.646 ***********
2025-06-02 17:46:32.326456 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.326476 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.326485 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:32.326495 | orchestrator |
2025-06-02 17:46:32.326504 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2025-06-02 17:46:32.326514 | orchestrator | Monday 02 June 2025 17:46:26 +0000 (0:00:02.094) 0:03:10.741 ***********
2025-06-02 17:46:32.326524 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.326534 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.326543 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:46:32.326552 | orchestrator |
2025-06-02 17:46:32.326562 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2025-06-02 17:46:32.326571 | orchestrator | Monday 02 June 2025 17:46:28 +0000 (0:00:02.177) 0:03:12.919 ***********
2025-06-02 17:46:32.326581 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:46:32.326590 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:46:32.326600 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:46:32.326622 | orchestrator |
2025-06-02 17:46:32.326632 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-06-02 17:46:32.326641 | orchestrator | Monday 02 June 2025 17:46:31 +0000 (0:00:02.961) 0:03:15.880 ***********
2025-06-02 17:46:32.326651 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:46:32.326660 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:46:32.326670 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:46:32.326680 | orchestrator |
2025-06-02 17:46:32.326690 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:46:32.326700 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-06-02 17:46:32.326711 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2025-06-02 17:46:32.326723 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-06-02 17:46:32.326733 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2025-06-02 17:46:32.326742 | orchestrator |
2025-06-02 17:46:32.326752 | orchestrator |
2025-06-02 17:46:32.326761 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:46:32.326805 | orchestrator | Monday 02 June 2025 17:46:31 +0000 (0:00:00.244) 0:03:16.125 ***********
2025-06-02 17:46:32.326822 | orchestrator | ===============================================================================
2025-06-02 17:46:32.326832 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.28s
2025-06-02 17:46:32.326842 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 39.96s
2025-06-02 17:46:32.326851 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.88s
2025-06-02 17:46:32.326860 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.86s
2025-06-02 17:46:32.326870 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.10s
2025-06-02 17:46:32.326880 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.06s
2025-06-02 17:46:32.326899 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.59s
2025-06-02 17:46:32.326909 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.45s
2025-06-02 17:46:32.326919 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.22s
2025-06-02 17:46:32.326928 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 4.16s
2025-06-02 17:46:32.326938 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.13s
2025-06-02 17:46:32.326947 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.67s
2025-06-02 17:46:32.326964 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.43s
2025-06-02 17:46:32.326973 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.40s
2025-06-02 17:46:32.326983 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.96s
2025-06-02 17:46:32.326993 | orchestrator |
Check MariaDB service --------------------------------------------------- 2.85s 2025-06-02 17:46:32.327002 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.68s 2025-06-02 17:46:32.327011 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.53s 2025-06-02 17:46:32.327021 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.45s 2025-06-02 17:46:32.327031 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.42s 2025-06-02 17:46:32.327047 | orchestrator | 2025-06-02 17:46:32 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:32.327057 | orchestrator | 2025-06-02 17:46:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:35.378527 | orchestrator | 2025-06-02 17:46:35 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:46:35.378621 | orchestrator | 2025-06-02 17:46:35 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:46:35.382371 | orchestrator | 2025-06-02 17:46:35 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:35.382459 | orchestrator | 2025-06-02 17:46:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:38.428709 | orchestrator | 2025-06-02 17:46:38 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:46:38.429667 | orchestrator | 2025-06-02 17:46:38 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:46:38.431215 | orchestrator | 2025-06-02 17:46:38 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:38.431308 | orchestrator | 2025-06-02 17:46:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:41.472101 | orchestrator | 2025-06-02 17:46:41 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state 
STARTED 2025-06-02 17:46:41.473183 | orchestrator | 2025-06-02 17:46:41 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:46:41.476465 | orchestrator | 2025-06-02 17:46:41 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:41.476630 | orchestrator | 2025-06-02 17:46:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:44.524703 | orchestrator | 2025-06-02 17:46:44 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:46:44.525539 | orchestrator | 2025-06-02 17:46:44 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:46:44.528373 | orchestrator | 2025-06-02 17:46:44 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:44.528419 | orchestrator | 2025-06-02 17:46:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:47.581541 | orchestrator | 2025-06-02 17:46:47 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:46:47.585203 | orchestrator | 2025-06-02 17:46:47 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:46:47.587947 | orchestrator | 2025-06-02 17:46:47 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:47.588022 | orchestrator | 2025-06-02 17:46:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:50.630165 | orchestrator | 2025-06-02 17:46:50 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:46:50.630271 | orchestrator | 2025-06-02 17:46:50 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:46:50.630973 | orchestrator | 2025-06-02 17:46:50 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:50.631024 | orchestrator | 2025-06-02 17:46:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:53.664609 | orchestrator | 
2025-06-02 17:46:53 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:46:53.664694 | orchestrator | 2025-06-02 17:46:53 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:46:53.664705 | orchestrator | 2025-06-02 17:46:53 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:53.664714 | orchestrator | 2025-06-02 17:46:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:56.712255 | orchestrator | 2025-06-02 17:46:56 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:46:56.712900 | orchestrator | 2025-06-02 17:46:56 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:46:56.714088 | orchestrator | 2025-06-02 17:46:56 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:56.714128 | orchestrator | 2025-06-02 17:46:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:46:59.748700 | orchestrator | 2025-06-02 17:46:59 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:46:59.750334 | orchestrator | 2025-06-02 17:46:59 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:46:59.752570 | orchestrator | 2025-06-02 17:46:59 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:46:59.754122 | orchestrator | 2025-06-02 17:46:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:02.790520 | orchestrator | 2025-06-02 17:47:02 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:02.791715 | orchestrator | 2025-06-02 17:47:02 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:02.791882 | orchestrator | 2025-06-02 17:47:02 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:47:02.791904 | orchestrator | 2025-06-02 17:47:02 | INFO  | 
Wait 1 second(s) until the next check 2025-06-02 17:47:05.833653 | orchestrator | 2025-06-02 17:47:05 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:05.834885 | orchestrator | 2025-06-02 17:47:05 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:05.837033 | orchestrator | 2025-06-02 17:47:05 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:47:05.837262 | orchestrator | 2025-06-02 17:47:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:08.882216 | orchestrator | 2025-06-02 17:47:08 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:08.883182 | orchestrator | 2025-06-02 17:47:08 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:08.884414 | orchestrator | 2025-06-02 17:47:08 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:47:08.884439 | orchestrator | 2025-06-02 17:47:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:11.936018 | orchestrator | 2025-06-02 17:47:11 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:11.937978 | orchestrator | 2025-06-02 17:47:11 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:11.940389 | orchestrator | 2025-06-02 17:47:11 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:47:11.940433 | orchestrator | 2025-06-02 17:47:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:14.984575 | orchestrator | 2025-06-02 17:47:14 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:14.986275 | orchestrator | 2025-06-02 17:47:14 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:14.987303 | orchestrator | 2025-06-02 17:47:14 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state 
STARTED 2025-06-02 17:47:14.987364 | orchestrator | 2025-06-02 17:47:14 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:18.028030 | orchestrator | 2025-06-02 17:47:18 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:18.030163 | orchestrator | 2025-06-02 17:47:18 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:18.030989 | orchestrator | 2025-06-02 17:47:18 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:47:18.031019 | orchestrator | 2025-06-02 17:47:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:21.077016 | orchestrator | 2025-06-02 17:47:21 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:21.079533 | orchestrator | 2025-06-02 17:47:21 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:21.081098 | orchestrator | 2025-06-02 17:47:21 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:47:21.081138 | orchestrator | 2025-06-02 17:47:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:24.135353 | orchestrator | 2025-06-02 17:47:24 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:24.135456 | orchestrator | 2025-06-02 17:47:24 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:24.136929 | orchestrator | 2025-06-02 17:47:24 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:47:24.136999 | orchestrator | 2025-06-02 17:47:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:27.186809 | orchestrator | 2025-06-02 17:47:27 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:27.188500 | orchestrator | 2025-06-02 17:47:27 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:27.190320 | orchestrator | 
2025-06-02 17:47:27 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:47:27.190595 | orchestrator | 2025-06-02 17:47:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:30.224444 | orchestrator | 2025-06-02 17:47:30 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:30.225133 | orchestrator | 2025-06-02 17:47:30 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:30.226251 | orchestrator | 2025-06-02 17:47:30 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state STARTED 2025-06-02 17:47:30.226272 | orchestrator | 2025-06-02 17:47:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:33.272006 | orchestrator | 2025-06-02 17:47:33 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:33.273346 | orchestrator | 2025-06-02 17:47:33 | INFO  | Task f4ad1a08-6d8b-4fb1-976f-69eab9050263 is in state STARTED 2025-06-02 17:47:33.274911 | orchestrator | 2025-06-02 17:47:33 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:33.278340 | orchestrator | 2025-06-02 17:47:33 | INFO  | Task 0b7c832b-d734-414e-b17c-085e0b805f5c is in state SUCCESS 2025-06-02 17:47:33.280560 | orchestrator | 2025-06-02 17:47:33.280604 | orchestrator | 2025-06-02 17:47:33.280617 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-06-02 17:47:33.280629 | orchestrator | 2025-06-02 17:47:33.280640 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-02 17:47:33.280652 | orchestrator | Monday 02 June 2025 17:45:19 +0000 (0:00:00.625) 0:00:00.625 *********** 2025-06-02 17:47:33.280669 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:47:33.281118 | orchestrator | 2025-06-02 17:47:33.281133 | orchestrator 
| TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-02 17:47:33.281144 | orchestrator | Monday 02 June 2025 17:45:20 +0000 (0:00:00.706) 0:00:01.332 *********** 2025-06-02 17:47:33.281155 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.281167 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.281178 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.281189 | orchestrator | 2025-06-02 17:47:33.281200 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-02 17:47:33.281211 | orchestrator | Monday 02 June 2025 17:45:21 +0000 (0:00:00.664) 0:00:01.996 *********** 2025-06-02 17:47:33.281222 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.281233 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.281244 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.281254 | orchestrator | 2025-06-02 17:47:33.281265 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-02 17:47:33.281276 | orchestrator | Monday 02 June 2025 17:45:21 +0000 (0:00:00.283) 0:00:02.279 *********** 2025-06-02 17:47:33.281287 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.281298 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.281308 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.281319 | orchestrator | 2025-06-02 17:47:33.281330 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-02 17:47:33.281340 | orchestrator | Monday 02 June 2025 17:45:22 +0000 (0:00:00.828) 0:00:03.107 *********** 2025-06-02 17:47:33.281351 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.281362 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.281372 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.281383 | orchestrator | 2025-06-02 17:47:33.281394 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] 
****************************************** 2025-06-02 17:47:33.281404 | orchestrator | Monday 02 June 2025 17:45:22 +0000 (0:00:00.301) 0:00:03.409 *********** 2025-06-02 17:47:33.281415 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.281426 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.281436 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.281510 | orchestrator | 2025-06-02 17:47:33.281896 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-02 17:47:33.281909 | orchestrator | Monday 02 June 2025 17:45:22 +0000 (0:00:00.329) 0:00:03.739 *********** 2025-06-02 17:47:33.281920 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.281931 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.281942 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.281952 | orchestrator | 2025-06-02 17:47:33.281963 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-02 17:47:33.281974 | orchestrator | Monday 02 June 2025 17:45:23 +0000 (0:00:00.319) 0:00:04.058 *********** 2025-06-02 17:47:33.281985 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.281997 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.282007 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.282069 | orchestrator | 2025-06-02 17:47:33.282081 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-02 17:47:33.282113 | orchestrator | Monday 02 June 2025 17:45:23 +0000 (0:00:00.513) 0:00:04.572 *********** 2025-06-02 17:47:33.282124 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.282136 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.282148 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.282159 | orchestrator | 2025-06-02 17:47:33.282169 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-02 
17:47:33.282180 | orchestrator | Monday 02 June 2025 17:45:23 +0000 (0:00:00.298) 0:00:04.871 *********** 2025-06-02 17:47:33.282191 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 17:47:33.282202 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 17:47:33.282213 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 17:47:33.282224 | orchestrator | 2025-06-02 17:47:33.282234 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-02 17:47:33.282259 | orchestrator | Monday 02 June 2025 17:45:24 +0000 (0:00:00.669) 0:00:05.540 *********** 2025-06-02 17:47:33.282270 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.282280 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.282291 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.282302 | orchestrator | 2025-06-02 17:47:33.282312 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-02 17:47:33.282323 | orchestrator | Monday 02 June 2025 17:45:25 +0000 (0:00:00.430) 0:00:05.971 *********** 2025-06-02 17:47:33.282334 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 17:47:33.282345 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 17:47:33.282355 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 17:47:33.282366 | orchestrator | 2025-06-02 17:47:33.282377 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-02 17:47:33.282387 | orchestrator | Monday 02 June 2025 17:45:27 +0000 (0:00:02.280) 0:00:08.252 *********** 2025-06-02 17:47:33.282398 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  
2025-06-02 17:47:33.282409 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-02 17:47:33.282420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-02 17:47:33.282430 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.282441 | orchestrator | 2025-06-02 17:47:33.282454 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-02 17:47:33.282515 | orchestrator | Monday 02 June 2025 17:45:27 +0000 (0:00:00.416) 0:00:08.668 *********** 2025-06-02 17:47:33.282533 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.282549 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.282563 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.282575 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.282587 | orchestrator | 2025-06-02 17:47:33.282600 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-02 17:47:33.282612 | orchestrator | Monday 02 June 2025 17:45:28 +0000 (0:00:00.809) 0:00:09.477 *********** 2025-06-02 17:47:33.282627 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.282652 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.282665 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.282719 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.282732 | orchestrator | 2025-06-02 17:47:33.282743 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-02 17:47:33.282753 | orchestrator | Monday 02 June 2025 17:45:28 +0000 (0:00:00.150) 0:00:09.628 *********** 2025-06-02 17:47:33.282773 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'aeb4d134e7b9', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-02 17:45:25.738025', 'end': '2025-06-02 17:45:25.770540', 'delta': '0:00:00.032515', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 
'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['aeb4d134e7b9'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-02 17:47:33.282789 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '983d9ba83449', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-02 17:45:26.563499', 'end': '2025-06-02 17:45:26.610486', 'delta': '0:00:00.046987', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['983d9ba83449'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-02 17:47:33.282840 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '734ce0e2fc26', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-02 17:45:27.120949', 'end': '2025-06-02 17:45:27.151746', 'delta': '0:00:00.030797', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['734ce0e2fc26'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-02 17:47:33.282854 | orchestrator | 2025-06-02 17:47:33.282865 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] 
******************************* 2025-06-02 17:47:33.282876 | orchestrator | Monday 02 June 2025 17:45:29 +0000 (0:00:00.381) 0:00:10.009 *********** 2025-06-02 17:47:33.282895 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.282906 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.282916 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.282927 | orchestrator | 2025-06-02 17:47:33.282938 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-02 17:47:33.282949 | orchestrator | Monday 02 June 2025 17:45:29 +0000 (0:00:00.432) 0:00:10.442 *********** 2025-06-02 17:47:33.282959 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-02 17:47:33.282970 | orchestrator | 2025-06-02 17:47:33.282981 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-02 17:47:33.282992 | orchestrator | Monday 02 June 2025 17:45:31 +0000 (0:00:01.710) 0:00:12.152 *********** 2025-06-02 17:47:33.283002 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.283013 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.283024 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.283034 | orchestrator | 2025-06-02 17:47:33.283045 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-02 17:47:33.283056 | orchestrator | Monday 02 June 2025 17:45:31 +0000 (0:00:00.313) 0:00:12.466 *********** 2025-06-02 17:47:33.283067 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.283077 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.283088 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.283099 | orchestrator | 2025-06-02 17:47:33.283110 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 17:47:33.283120 | orchestrator | Monday 02 June 2025 17:45:31 +0000 (0:00:00.410) 0:00:12.877 
*********** 2025-06-02 17:47:33.283131 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.283141 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.283152 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.283163 | orchestrator | 2025-06-02 17:47:33.283174 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-02 17:47:33.283184 | orchestrator | Monday 02 June 2025 17:45:32 +0000 (0:00:00.491) 0:00:13.368 *********** 2025-06-02 17:47:33.283195 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.283206 | orchestrator | 2025-06-02 17:47:33.283217 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-02 17:47:33.283227 | orchestrator | Monday 02 June 2025 17:45:32 +0000 (0:00:00.145) 0:00:13.513 *********** 2025-06-02 17:47:33.283238 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.283248 | orchestrator | 2025-06-02 17:47:33.283259 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-02 17:47:33.283270 | orchestrator | Monday 02 June 2025 17:45:32 +0000 (0:00:00.251) 0:00:13.765 *********** 2025-06-02 17:47:33.283281 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.283292 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.283302 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.283313 | orchestrator | 2025-06-02 17:47:33.283324 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-02 17:47:33.283334 | orchestrator | Monday 02 June 2025 17:45:33 +0000 (0:00:00.308) 0:00:14.074 *********** 2025-06-02 17:47:33.283345 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.283356 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.283366 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.283377 | orchestrator | 2025-06-02 
17:47:33.283388 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-02 17:47:33.283420 | orchestrator | Monday 02 June 2025 17:45:33 +0000 (0:00:00.336) 0:00:14.410 *********** 2025-06-02 17:47:33.283443 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.283454 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.283465 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.283476 | orchestrator | 2025-06-02 17:47:33.283487 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-02 17:47:33.283497 | orchestrator | Monday 02 June 2025 17:45:33 +0000 (0:00:00.531) 0:00:14.942 *********** 2025-06-02 17:47:33.283514 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.283525 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.283546 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.283557 | orchestrator | 2025-06-02 17:47:33.283568 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-02 17:47:33.283579 | orchestrator | Monday 02 June 2025 17:45:34 +0000 (0:00:00.371) 0:00:15.314 *********** 2025-06-02 17:47:33.283589 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.283600 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.283611 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.283622 | orchestrator | 2025-06-02 17:47:33.283632 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-02 17:47:33.283643 | orchestrator | Monday 02 June 2025 17:45:34 +0000 (0:00:00.334) 0:00:15.648 *********** 2025-06-02 17:47:33.283654 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.283664 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.283842 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.283879 | orchestrator | 2025-06-02 
17:47:33.283891 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-02 17:47:33.283965 | orchestrator | Monday 02 June 2025 17:45:35 +0000 (0:00:00.356) 0:00:16.005 *********** 2025-06-02 17:47:33.283978 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.283989 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.284000 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.284011 | orchestrator | 2025-06-02 17:47:33.284022 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-02 17:47:33.284033 | orchestrator | Monday 02 June 2025 17:45:35 +0000 (0:00:00.565) 0:00:16.570 *********** 2025-06-02 17:47:33.284045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8450978f--95f9--56a8--b94f--b89f59985534-osd--block--8450978f--95f9--56a8--b94f--b89f59985534', 'dm-uuid-LVM-C1PeLgF1SxuUfh3ynRcRKoj564FyEqEhCHhSqiIiYbxftGB6XqSANuIyMw54bdoo'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4af7f5ab--70f7--5f81--8195--4d6574833a1e-osd--block--4af7f5ab--70f7--5f81--8195--4d6574833a1e', 'dm-uuid-LVM-pFJq6nbtSqDHxlWYzG8pS3VeXlxNepxxO2BGKsksEHWXQF2TkE1j1GjykyBHupHO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284084 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284139 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284181 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284221 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part1', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part14', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part15', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part16', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284248 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--428bf6aa--16e8--529e--a7f6--02fc5b7007d7-osd--block--428bf6aa--16e8--529e--a7f6--02fc5b7007d7', 'dm-uuid-LVM-fHoNCxtRreMFFTWOPBe2ysAAlEBwyI3gFg84Qx1fAvx2XHSc65dIcB3OudZopEIx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284286 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--8450978f--95f9--56a8--b94f--b89f59985534-osd--block--8450978f--95f9--56a8--b94f--b89f59985534'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dtLcbm-BvrF-poUw-P8wK-mlch-Xot4-XRgIij', 'scsi-0QEMU_QEMU_HARDDISK_f446ae25-d9a7-444f-b563-a9cba680652a', 'scsi-SQEMU_QEMU_HARDDISK_f446ae25-d9a7-444f-b563-a9cba680652a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--26d332e8--3a94--5f56--adf2--82846ed63b84-osd--block--26d332e8--3a94--5f56--adf2--82846ed63b84', 'dm-uuid-LVM-9xcVI4TBNfIyK6jFKjrZCWdl0mksa54asOizRAQetCkX2NpAhYr96uEe6IeSNSZ9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--4af7f5ab--70f7--5f81--8195--4d6574833a1e-osd--block--4af7f5ab--70f7--5f81--8195--4d6574833a1e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UDcTuA-YoxY-RB14-ZrH1-jOQP-Bnc2-CbHAFd', 'scsi-0QEMU_QEMU_HARDDISK_dd4bab9d-0787-4709-bf4e-89aace2da140', 'scsi-SQEMU_QEMU_HARDDISK_dd4bab9d-0787-4709-bf4e-89aace2da140'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7f9d288-1a32-443d-a362-6ba679ef0f8f', 'scsi-SQEMU_QEMU_HARDDISK_c7f9d288-1a32-443d-a362-6ba679ef0f8f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284400 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2025-06-02 17:47:33.284412 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284442 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284540 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part14', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part15', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284556 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.284566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--428bf6aa--16e8--529e--a7f6--02fc5b7007d7-osd--block--428bf6aa--16e8--529e--a7f6--02fc5b7007d7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-L0Uoew-tdG5-5o2e-uK3H-Tk0g-iUQ0-9OmC0S', 'scsi-0QEMU_QEMU_HARDDISK_7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3', 'scsi-SQEMU_QEMU_HARDDISK_7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284578 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--26d332e8--3a94--5f56--adf2--82846ed63b84-osd--block--26d332e8--3a94--5f56--adf2--82846ed63b84'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ePsnht-YeWJ-Lf9E-hAE9-dAcD-3nfo-eUnWxm', 'scsi-0QEMU_QEMU_HARDDISK_cab884bf-6138-4574-8f5c-e044606bea62', 'scsi-SQEMU_QEMU_HARDDISK_cab884bf-6138-4574-8f5c-e044606bea62'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_075a40bb-072b-46c1-930e-3c0277237be4', 'scsi-SQEMU_QEMU_HARDDISK_075a40bb-072b-46c1-930e-3c0277237be4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284607 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284617 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.284631 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 
'host': '', 'links': {'ids': ['dm-name-ceph--7944d10b--922c--5cd9--bd54--91ce5496d9bc-osd--block--7944d10b--922c--5cd9--bd54--91ce5496d9bc', 'dm-uuid-LVM-ytups1pI5RQScR8es6EC2ehzveRarGHlbFqc4V4MjMzJo3TlgtjjYi6IsQ2GV1XY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--455b12e9--4014--57cf--aec2--de5d805a7d14-osd--block--455b12e9--4014--57cf--aec2--de5d805a7d14', 'dm-uuid-LVM-41xUQUmZVztKsWiHhnpwo6xNJtTVNfNAFjLeRlfZIUjvsJzby2C0fsQozgJh83BM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284716 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284733 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284778 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284822 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-02 17:47:33.284861 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part1', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part14', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part15', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part16', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284881 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7944d10b--922c--5cd9--bd54--91ce5496d9bc-osd--block--7944d10b--922c--5cd9--bd54--91ce5496d9bc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CAozE0-JMkL-sS2s-sKDL-CQKZ-VNnx-KvTVaZ', 'scsi-0QEMU_QEMU_HARDDISK_4a588e14-c726-4684-ac8a-ec1bcbcaf53d', 'scsi-SQEMU_QEMU_HARDDISK_4a588e14-c726-4684-ac8a-ec1bcbcaf53d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284898 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--455b12e9--4014--57cf--aec2--de5d805a7d14-osd--block--455b12e9--4014--57cf--aec2--de5d805a7d14'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pucggk-7A71-e7n9-I93l-XDiI-evfo-q9vyJA', 'scsi-0QEMU_QEMU_HARDDISK_42dd6fc7-77c1-48dd-afcf-d774f79f6bbd', 'scsi-SQEMU_QEMU_HARDDISK_42dd6fc7-77c1-48dd-afcf-d774f79f6bbd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53941cc3-a8ff-45b3-9c82-286f81867ab6', 'scsi-SQEMU_QEMU_HARDDISK_53941cc3-a8ff-45b3-9c82-286f81867ab6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284928 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-02 17:47:33.284940 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.284951 | orchestrator | 2025-06-02 17:47:33.284962 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-02 17:47:33.284973 | orchestrator | Monday 02 June 2025 17:45:36 +0000 (0:00:00.607) 0:00:17.178 *********** 2025-06-02 17:47:33.284985 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8450978f--95f9--56a8--b94f--b89f59985534-osd--block--8450978f--95f9--56a8--b94f--b89f59985534', 'dm-uuid-LVM-C1PeLgF1SxuUfh3ynRcRKoj564FyEqEhCHhSqiIiYbxftGB6XqSANuIyMw54bdoo'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285006 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--4af7f5ab--70f7--5f81--8195--4d6574833a1e-osd--block--4af7f5ab--70f7--5f81--8195--4d6574833a1e', 'dm-uuid-LVM-pFJq6nbtSqDHxlWYzG8pS3VeXlxNepxxO2BGKsksEHWXQF2TkE1j1GjykyBHupHO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285019 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285035 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285047 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285066 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285078 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285096 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285108 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285119 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285136 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--428bf6aa--16e8--529e--a7f6--02fc5b7007d7-osd--block--428bf6aa--16e8--529e--a7f6--02fc5b7007d7', 'dm-uuid-LVM-fHoNCxtRreMFFTWOPBe2ysAAlEBwyI3gFg84Qx1fAvx2XHSc65dIcB3OudZopEIx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285157 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part1', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part14', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part15', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part16', 'scsi-SQEMU_QEMU_HARDDISK_99761c60-bcd6-43ee-98a0-4756239a5a12-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-06-02 17:47:33.285176 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--26d332e8--3a94--5f56--adf2--82846ed63b84-osd--block--26d332e8--3a94--5f56--adf2--82846ed63b84', 'dm-uuid-LVM-9xcVI4TBNfIyK6jFKjrZCWdl0mksa54asOizRAQetCkX2NpAhYr96uEe6IeSNSZ9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285195 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--8450978f--95f9--56a8--b94f--b89f59985534-osd--block--8450978f--95f9--56a8--b94f--b89f59985534'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dtLcbm-BvrF-poUw-P8wK-mlch-Xot4-XRgIij', 'scsi-0QEMU_QEMU_HARDDISK_f446ae25-d9a7-444f-b563-a9cba680652a', 'scsi-SQEMU_QEMU_HARDDISK_f446ae25-d9a7-444f-b563-a9cba680652a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285215 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--4af7f5ab--70f7--5f81--8195--4d6574833a1e-osd--block--4af7f5ab--70f7--5f81--8195--4d6574833a1e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-UDcTuA-YoxY-RB14-ZrH1-jOQP-Bnc2-CbHAFd', 'scsi-0QEMU_QEMU_HARDDISK_dd4bab9d-0787-4709-bf4e-89aace2da140', 'scsi-SQEMU_QEMU_HARDDISK_dd4bab9d-0787-4709-bf4e-89aace2da140'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285227 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285247 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7f9d288-1a32-443d-a362-6ba679ef0f8f', 'scsi-SQEMU_QEMU_HARDDISK_c7f9d288-1a32-443d-a362-6ba679ef0f8f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285258 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285278 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: 
Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285290 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285310 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285322 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285340 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285351 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.285363 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285375 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285401 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part1', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part14', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part15', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 
512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part16', 'scsi-SQEMU_QEMU_HARDDISK_60870759-8a8b-4186-93b0-9dd809266b84-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285420 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--428bf6aa--16e8--529e--a7f6--02fc5b7007d7-osd--block--428bf6aa--16e8--529e--a7f6--02fc5b7007d7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-L0Uoew-tdG5-5o2e-uK3H-Tk0g-iUQ0-9OmC0S', 'scsi-0QEMU_QEMU_HARDDISK_7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3', 'scsi-SQEMU_QEMU_HARDDISK_7ea98d4d-cf7e-4ca7-96c5-3a7dde2a53e3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285432 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--26d332e8--3a94--5f56--adf2--82846ed63b84-osd--block--26d332e8--3a94--5f56--adf2--82846ed63b84'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ePsnht-YeWJ-Lf9E-hAE9-dAcD-3nfo-eUnWxm', 'scsi-0QEMU_QEMU_HARDDISK_cab884bf-6138-4574-8f5c-e044606bea62', 'scsi-SQEMU_QEMU_HARDDISK_cab884bf-6138-4574-8f5c-e044606bea62'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285448 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_075a40bb-072b-46c1-930e-3c0277237be4', 'scsi-SQEMU_QEMU_HARDDISK_075a40bb-072b-46c1-930e-3c0277237be4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285468 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285487 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7944d10b--922c--5cd9--bd54--91ce5496d9bc-osd--block--7944d10b--922c--5cd9--bd54--91ce5496d9bc', 'dm-uuid-LVM-ytups1pI5RQScR8es6EC2ehzveRarGHlbFqc4V4MjMzJo3TlgtjjYi6IsQ2GV1XY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285498 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.285510 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--455b12e9--4014--57cf--aec2--de5d805a7d14-osd--block--455b12e9--4014--57cf--aec2--de5d805a7d14', 'dm-uuid-LVM-41xUQUmZVztKsWiHhnpwo6xNJtTVNfNAFjLeRlfZIUjvsJzby2C0fsQozgJh83BM'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285521 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285537 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285548 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285565 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285583 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285595 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285606 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285617 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285642 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part1', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part14', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part15', 'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part16', 
'scsi-SQEMU_QEMU_HARDDISK_e83e2705-4f98-41ae-acf9-bfb494f15fd6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285661 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7944d10b--922c--5cd9--bd54--91ce5496d9bc-osd--block--7944d10b--922c--5cd9--bd54--91ce5496d9bc'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-CAozE0-JMkL-sS2s-sKDL-CQKZ-VNnx-KvTVaZ', 'scsi-0QEMU_QEMU_HARDDISK_4a588e14-c726-4684-ac8a-ec1bcbcaf53d', 'scsi-SQEMU_QEMU_HARDDISK_4a588e14-c726-4684-ac8a-ec1bcbcaf53d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285695 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--455b12e9--4014--57cf--aec2--de5d805a7d14-osd--block--455b12e9--4014--57cf--aec2--de5d805a7d14'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-pucggk-7A71-e7n9-I93l-XDiI-evfo-q9vyJA', 'scsi-0QEMU_QEMU_HARDDISK_42dd6fc7-77c1-48dd-afcf-d774f79f6bbd', 'scsi-SQEMU_QEMU_HARDDISK_42dd6fc7-77c1-48dd-afcf-d774f79f6bbd'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285712 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_53941cc3-a8ff-45b3-9c82-286f81867ab6', 'scsi-SQEMU_QEMU_HARDDISK_53941cc3-a8ff-45b3-9c82-286f81867ab6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285731 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-02-16-53-49-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-02 17:47:33.285751 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.285762 | orchestrator | 2025-06-02 17:47:33.285774 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-02 17:47:33.285785 | orchestrator | Monday 02 June 2025 17:45:36 +0000 (0:00:00.676) 0:00:17.854 *********** 2025-06-02 17:47:33.285796 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.285813 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.285831 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.285847 | orchestrator | 2025-06-02 17:47:33.285863 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-02 17:47:33.285880 | orchestrator | Monday 02 June 2025 17:45:37 +0000 (0:00:00.717) 0:00:18.571 *********** 2025-06-02 17:47:33.285898 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.285913 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.285929 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.285945 | orchestrator | 2025-06-02 17:47:33.285964 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-02 17:47:33.285983 | orchestrator | Monday 02 June 2025 17:45:38 +0000 (0:00:00.487) 0:00:19.058 *********** 2025-06-02 17:47:33.286001 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.286076 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.286091 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.286101 | orchestrator | 2025-06-02 17:47:33.286113 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-02 17:47:33.286124 | orchestrator | Monday 02 June 2025 17:45:38 +0000 (0:00:00.673) 0:00:19.732 
*********** 2025-06-02 17:47:33.286135 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.286146 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.286156 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.286167 | orchestrator | 2025-06-02 17:47:33.286178 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-02 17:47:33.286188 | orchestrator | Monday 02 June 2025 17:45:39 +0000 (0:00:00.279) 0:00:20.012 *********** 2025-06-02 17:47:33.286199 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.286210 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.286221 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.286232 | orchestrator | 2025-06-02 17:47:33.286242 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-02 17:47:33.286253 | orchestrator | Monday 02 June 2025 17:45:39 +0000 (0:00:00.476) 0:00:20.488 *********** 2025-06-02 17:47:33.286263 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.286274 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.286285 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.286295 | orchestrator | 2025-06-02 17:47:33.286306 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-02 17:47:33.286317 | orchestrator | Monday 02 June 2025 17:45:40 +0000 (0:00:00.537) 0:00:21.026 *********** 2025-06-02 17:47:33.286328 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-02 17:47:33.286339 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-02 17:47:33.286349 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-02 17:47:33.286360 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-02 17:47:33.286371 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-02 17:47:33.286381 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-02 17:47:33.286392 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-02 17:47:33.286412 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-02 17:47:33.286422 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-02 17:47:33.286433 | orchestrator | 2025-06-02 17:47:33.286444 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-02 17:47:33.286455 | orchestrator | Monday 02 June 2025 17:45:40 +0000 (0:00:00.905) 0:00:21.931 *********** 2025-06-02 17:47:33.286466 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-02 17:47:33.286477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-02 17:47:33.286487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-02 17:47:33.286498 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.286519 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-02 17:47:33.286537 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-02 17:47:33.286564 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-02 17:47:33.286583 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.286600 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-02 17:47:33.286619 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-02 17:47:33.286638 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-02 17:47:33.286656 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.286671 | orchestrator | 2025-06-02 17:47:33.286722 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-02 17:47:33.286733 | orchestrator | Monday 02 June 2025 17:45:41 +0000 (0:00:00.352) 0:00:22.284 *********** 2025-06-02 
17:47:33.286745 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:47:33.286756 | orchestrator | 2025-06-02 17:47:33.286767 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-02 17:47:33.286780 | orchestrator | Monday 02 June 2025 17:45:42 +0000 (0:00:00.785) 0:00:23.069 *********** 2025-06-02 17:47:33.286791 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.286802 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.286812 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.286823 | orchestrator | 2025-06-02 17:47:33.286846 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-02 17:47:33.286857 | orchestrator | Monday 02 June 2025 17:45:42 +0000 (0:00:00.348) 0:00:23.418 *********** 2025-06-02 17:47:33.286868 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.286878 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.286889 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.286900 | orchestrator | 2025-06-02 17:47:33.286911 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-02 17:47:33.286922 | orchestrator | Monday 02 June 2025 17:45:42 +0000 (0:00:00.310) 0:00:23.728 *********** 2025-06-02 17:47:33.286933 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.286944 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.286954 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:47:33.286965 | orchestrator | 2025-06-02 17:47:33.286976 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-02 17:47:33.286987 | orchestrator | Monday 02 June 2025 17:45:43 +0000 (0:00:00.314) 0:00:24.042 *********** 2025-06-02 
17:47:33.286998 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.287009 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.287020 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.287030 | orchestrator | 2025-06-02 17:47:33.287041 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-02 17:47:33.287052 | orchestrator | Monday 02 June 2025 17:45:43 +0000 (0:00:00.631) 0:00:24.674 *********** 2025-06-02 17:47:33.287063 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:47:33.287085 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:47:33.287097 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:47:33.287108 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.287118 | orchestrator | 2025-06-02 17:47:33.287130 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-02 17:47:33.287140 | orchestrator | Monday 02 June 2025 17:45:44 +0000 (0:00:00.387) 0:00:25.061 *********** 2025-06-02 17:47:33.287151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:47:33.287162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:47:33.287174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:47:33.287185 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.287196 | orchestrator | 2025-06-02 17:47:33.287207 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-02 17:47:33.287218 | orchestrator | Monday 02 June 2025 17:45:44 +0000 (0:00:00.399) 0:00:25.461 *********** 2025-06-02 17:47:33.287229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-02 17:47:33.287240 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-02 17:47:33.287251 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-02 17:47:33.287263 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.287274 | orchestrator | 2025-06-02 17:47:33.287285 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-02 17:47:33.287296 | orchestrator | Monday 02 June 2025 17:45:44 +0000 (0:00:00.373) 0:00:25.835 *********** 2025-06-02 17:47:33.287307 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:47:33.287318 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:47:33.287329 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:47:33.287340 | orchestrator | 2025-06-02 17:47:33.287350 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-02 17:47:33.287361 | orchestrator | Monday 02 June 2025 17:45:45 +0000 (0:00:00.324) 0:00:26.159 *********** 2025-06-02 17:47:33.287372 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-02 17:47:33.287383 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-02 17:47:33.287394 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-02 17:47:33.287406 | orchestrator | 2025-06-02 17:47:33.287417 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-02 17:47:33.287428 | orchestrator | Monday 02 June 2025 17:45:45 +0000 (0:00:00.501) 0:00:26.661 *********** 2025-06-02 17:47:33.287439 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 17:47:33.287449 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 17:47:33.287460 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 17:47:33.287471 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-02 17:47:33.287493 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-06-02 17:47:33.287505 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-02 17:47:33.287516 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-02 17:47:33.287527 | orchestrator | 2025-06-02 17:47:33.287538 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-02 17:47:33.287549 | orchestrator | Monday 02 June 2025 17:45:46 +0000 (0:00:00.982) 0:00:27.643 *********** 2025-06-02 17:47:33.287560 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-02 17:47:33.287571 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-02 17:47:33.287582 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-02 17:47:33.287593 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-02 17:47:33.287611 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-02 17:47:33.287622 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-02 17:47:33.287633 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-02 17:47:33.287644 | orchestrator | 2025-06-02 17:47:33.287661 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-06-02 17:47:33.287713 | orchestrator | Monday 02 June 2025 17:45:48 +0000 (0:00:02.039) 0:00:29.682 *********** 2025-06-02 17:47:33.287726 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:47:33.287738 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:47:33.287749 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-06-02 17:47:33.287760 | orchestrator | 2025-06-02 17:47:33.287771 | 
orchestrator | TASK [create openstack pool(s)] ************************************************
2025-06-02 17:47:33.287782 | orchestrator | Monday 02 June 2025 17:45:49 +0000 (0:00:00.398) 0:00:30.081 ***********
2025-06-02 17:47:33.287795 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 17:47:33.287809 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 17:47:33.287820 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 17:47:33.287832 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 17:47:33.287844 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-06-02 17:47:33.287855 | orchestrator |
2025-06-02 17:47:33.287866 | orchestrator | TASK [generate keys] ***********************************************************
2025-06-02 17:47:33.287878 | orchestrator | Monday 02 June 2025 17:46:35 +0000 (0:00:46.065) 0:01:16.147 ***********
2025-06-02 17:47:33.287888 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.287899 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.287910 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.287921 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.287932 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.287943 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.287954 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-06-02 17:47:33.287964 | orchestrator |
2025-06-02 17:47:33.287975 | orchestrator | TASK [get keys from monitors] **************************************************
2025-06-02 17:47:33.287986 | orchestrator | Monday 02 June 2025 17:47:00 +0000 (0:00:25.450) 0:01:41.597 ***********
2025-06-02 17:47:33.287997 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.288019 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.288036 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.288047 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.288058 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.288069 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.288080 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-06-02 17:47:33.288091 | orchestrator |
2025-06-02 17:47:33.288102 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-06-02 17:47:33.288113 | orchestrator | Monday 02 June 2025 17:47:12 +0000 (0:00:12.264) 0:01:53.862 ***********
2025-06-02 17:47:33.288124 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.288135 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 17:47:33.288145 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 17:47:33.288157 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.288168 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 17:47:33.288179 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 17:47:33.288199 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.288211 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 17:47:33.288222 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 17:47:33.288232 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.288243 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 17:47:33.288254 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 17:47:33.288265 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.288276 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 17:47:33.288287 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 17:47:33.288298 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-06-02 17:47:33.288309 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-06-02 17:47:33.288320 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-06-02 17:47:33.288331 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-06-02 17:47:33.288342 | orchestrator |
2025-06-02 17:47:33.288352 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:47:33.288363 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-06-02 17:47:33.288376 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-02 17:47:33.288387 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-06-02 17:47:33.288398 | orchestrator |
2025-06-02 17:47:33.288408 | orchestrator |
2025-06-02 17:47:33.288419 | orchestrator |
2025-06-02 17:47:33.288430 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:47:33.288440 | orchestrator | Monday 02 June 2025 17:47:30 +0000 (0:00:17.860) 0:02:11.723 ***********
2025-06-02 17:47:33.288451 | orchestrator | ===============================================================================
2025-06-02 17:47:33.288472 | orchestrator | create openstack pool(s) ----------------------------------------------- 46.07s
2025-06-02 17:47:33.288483 | orchestrator | generate keys ---------------------------------------------------------- 25.45s
2025-06-02 17:47:33.288493 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.86s
2025-06-02 17:47:33.288504 | orchestrator | get keys from monitors ------------------------------------------------- 12.26s
2025-06-02 17:47:33.288515 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.28s
2025-06-02 17:47:33.288525 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.04s
2025-06-02 17:47:33.288536 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.71s
2025-06-02 17:47:33.288547 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.98s
2025-06-02 17:47:33.288558 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.91s
2025-06-02 17:47:33.288568 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.83s
2025-06-02 17:47:33.288579 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.81s
2025-06-02 17:47:33.288590 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.79s
2025-06-02 17:47:33.288600 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.72s
2025-06-02 17:47:33.288611 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.71s
2025-06-02 17:47:33.288622 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.68s
2025-06-02 17:47:33.288638 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.67s
2025-06-02 17:47:33.288649 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.67s
2025-06-02 17:47:33.288660 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.66s
2025-06-02 17:47:33.288671 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.63s
2025-06-02
17:47:33.288698 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.61s 2025-06-02 17:47:33.288709 | orchestrator | 2025-06-02 17:47:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:36.325037 | orchestrator | 2025-06-02 17:47:36 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:36.327441 | orchestrator | 2025-06-02 17:47:36 | INFO  | Task f4ad1a08-6d8b-4fb1-976f-69eab9050263 is in state STARTED 2025-06-02 17:47:36.329991 | orchestrator | 2025-06-02 17:47:36 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:36.330141 | orchestrator | 2025-06-02 17:47:36 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:39.372886 | orchestrator | 2025-06-02 17:47:39 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:39.375346 | orchestrator | 2025-06-02 17:47:39 | INFO  | Task f4ad1a08-6d8b-4fb1-976f-69eab9050263 is in state STARTED 2025-06-02 17:47:39.378251 | orchestrator | 2025-06-02 17:47:39 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:39.378736 | orchestrator | 2025-06-02 17:47:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:42.431305 | orchestrator | 2025-06-02 17:47:42 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:42.433081 | orchestrator | 2025-06-02 17:47:42 | INFO  | Task f4ad1a08-6d8b-4fb1-976f-69eab9050263 is in state STARTED 2025-06-02 17:47:42.435292 | orchestrator | 2025-06-02 17:47:42 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:42.435366 | orchestrator | 2025-06-02 17:47:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:45.493946 | orchestrator | 2025-06-02 17:47:45 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:45.496142 | orchestrator | 2025-06-02 17:47:45 | INFO  | Task 
f4ad1a08-6d8b-4fb1-976f-69eab9050263 is in state STARTED 2025-06-02 17:47:45.501781 | orchestrator | 2025-06-02 17:47:45 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:45.501845 | orchestrator | 2025-06-02 17:47:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:48.547387 | orchestrator | 2025-06-02 17:47:48 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:48.549187 | orchestrator | 2025-06-02 17:47:48 | INFO  | Task f4ad1a08-6d8b-4fb1-976f-69eab9050263 is in state STARTED 2025-06-02 17:47:48.551805 | orchestrator | 2025-06-02 17:47:48 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:48.551860 | orchestrator | 2025-06-02 17:47:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:51.601958 | orchestrator | 2025-06-02 17:47:51 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:51.602118 | orchestrator | 2025-06-02 17:47:51 | INFO  | Task f4ad1a08-6d8b-4fb1-976f-69eab9050263 is in state STARTED 2025-06-02 17:47:51.604451 | orchestrator | 2025-06-02 17:47:51 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:51.604854 | orchestrator | 2025-06-02 17:47:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:54.657492 | orchestrator | 2025-06-02 17:47:54 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:47:54.657716 | orchestrator | 2025-06-02 17:47:54 | INFO  | Task f4ad1a08-6d8b-4fb1-976f-69eab9050263 is in state STARTED 2025-06-02 17:47:54.660190 | orchestrator | 2025-06-02 17:47:54 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:54.660241 | orchestrator | 2025-06-02 17:47:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:47:57.719447 | orchestrator | 2025-06-02 17:47:57 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state 
STARTED 2025-06-02 17:47:57.721782 | orchestrator | 2025-06-02 17:47:57 | INFO  | Task f4ad1a08-6d8b-4fb1-976f-69eab9050263 is in state STARTED 2025-06-02 17:47:57.723671 | orchestrator | 2025-06-02 17:47:57 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:47:57.723740 | orchestrator | 2025-06-02 17:47:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:00.771883 | orchestrator | 2025-06-02 17:48:00 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:48:00.772693 | orchestrator | 2025-06-02 17:48:00 | INFO  | Task f4ad1a08-6d8b-4fb1-976f-69eab9050263 is in state STARTED 2025-06-02 17:48:00.774210 | orchestrator | 2025-06-02 17:48:00 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:48:00.774251 | orchestrator | 2025-06-02 17:48:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:03.835182 | orchestrator | 2025-06-02 17:48:03 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:48:03.835826 | orchestrator | 2025-06-02 17:48:03 | INFO  | Task f4ad1a08-6d8b-4fb1-976f-69eab9050263 is in state SUCCESS 2025-06-02 17:48:03.837885 | orchestrator | 2025-06-02 17:48:03 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:48:03.839731 | orchestrator | 2025-06-02 17:48:03 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED 2025-06-02 17:48:03.839814 | orchestrator | 2025-06-02 17:48:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:06.896059 | orchestrator | 2025-06-02 17:48:06 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:48:06.898590 | orchestrator | 2025-06-02 17:48:06 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED 2025-06-02 17:48:06.901147 | orchestrator | 2025-06-02 17:48:06 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED 2025-06-02 
17:48:06.901202 | orchestrator | 2025-06-02 17:48:06 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:09.956185 | orchestrator | 2025-06-02 17:48:09 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED
2025-06-02 17:48:09.957564 | orchestrator | 2025-06-02 17:48:09 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED
2025-06-02 17:48:09.959506 | orchestrator | 2025-06-02 17:48:09 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED
2025-06-02 17:48:09.959556 | orchestrator | 2025-06-02 17:48:09 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:13.014485 | orchestrator | 2025-06-02 17:48:13 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED
2025-06-02 17:48:13.014576 | orchestrator | 2025-06-02 17:48:13 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED
2025-06-02 17:48:13.015637 | orchestrator | 2025-06-02 17:48:13 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED
2025-06-02 17:48:13.015665 | orchestrator | 2025-06-02 17:48:13 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:16.060705 | orchestrator | 2025-06-02 17:48:16 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED
2025-06-02 17:48:16.064673 | orchestrator | 2025-06-02 17:48:16 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED
2025-06-02 17:48:16.068881 | orchestrator | 2025-06-02 17:48:16 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED
2025-06-02 17:48:16.068928 | orchestrator | 2025-06-02 17:48:16 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:19.111617 | orchestrator | 2025-06-02 17:48:19 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED
2025-06-02 17:48:19.113549 | orchestrator | 2025-06-02 17:48:19 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED
2025-06-02 17:48:19.115787 | orchestrator | 2025-06-02 17:48:19 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED
2025-06-02 17:48:19.115831 | orchestrator | 2025-06-02 17:48:19 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:22.154194 | orchestrator | 2025-06-02 17:48:22 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED
2025-06-02 17:48:22.155469 | orchestrator | 2025-06-02 17:48:22 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state STARTED
2025-06-02 17:48:22.157554 | orchestrator | 2025-06-02 17:48:22 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED
2025-06-02 17:48:22.157586 | orchestrator | 2025-06-02 17:48:22 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:48:25.191693 | orchestrator | 2025-06-02 17:48:25 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED
2025-06-02 17:48:25.195985 | orchestrator |
2025-06-02 17:48:25.196030 | orchestrator |
2025-06-02 17:48:25.196038 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-06-02 17:48:25.196046 | orchestrator |
2025-06-02 17:48:25.196066 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-06-02 17:48:25.196074 | orchestrator | Monday 02 June 2025 17:47:35 +0000 (0:00:00.160) 0:00:00.160 ***********
2025-06-02 17:48:25.196080 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-06-02 17:48:25.196106 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-02 17:48:25.196112 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-02 17:48:25.196118 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 17:48:25.196125 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-06-02 17:48:25.196131 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-06-02 17:48:25.196137 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-06-02 17:48:25.196143 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-06-02 17:48:25.196149 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-06-02 17:48:25.196155 | orchestrator |
2025-06-02 17:48:25.196161 | orchestrator | TASK [Create share directory] **************************************************
2025-06-02 17:48:25.196167 | orchestrator | Monday 02 June 2025 17:47:39 +0000 (0:00:04.111) 0:00:04.271 ***********
2025-06-02 17:48:25.196174 | orchestrator | changed: [testbed-manager -> localhost]
2025-06-02 17:48:25.196180 | orchestrator |
2025-06-02 17:48:25.196186 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-06-02 17:48:25.196192 | orchestrator | Monday 02 June 2025 17:47:40 +0000 (0:00:01.017) 0:00:05.288 ***********
2025-06-02 17:48:25.196198 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-06-02 17:48:25.196205 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-02 17:48:25.196211 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-02 17:48:25.196217 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 17:48:25.196223 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-06-02 17:48:25.196229 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-06-02 17:48:25.196235 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-06-02 17:48:25.196241 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-06-02 17:48:25.196247 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-06-02 17:48:25.196253 | orchestrator |
2025-06-02 17:48:25.196260 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2025-06-02 17:48:25.196266 | orchestrator | Monday 02 June 2025 17:47:54 +0000 (0:00:13.743) 0:00:19.032 ***********
2025-06-02 17:48:25.196272 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-06-02 17:48:25.196278 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-02 17:48:25.196284 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-02 17:48:25.196290 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 17:48:25.196296 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-06-02 17:48:25.196303 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-06-02 17:48:25.196309 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-06-02 17:48:25.196315 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-06-02 17:48:25.196321 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-06-02 17:48:25.196327 | orchestrator |
2025-06-02 17:48:25.196333 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:48:25.196344 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:48:25.196352 | orchestrator |
2025-06-02 17:48:25.196358 | orchestrator |
2025-06-02 17:48:25.196364 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:48:25.196370 | orchestrator | Monday 02 June 2025 17:48:01 +0000 (0:00:06.980) 0:00:26.012 ***********
2025-06-02 17:48:25.196376 | orchestrator | ===============================================================================
2025-06-02 17:48:25.196382 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.74s
2025-06-02 17:48:25.196388 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.98s
2025-06-02 17:48:25.196610 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.11s
2025-06-02 17:48:25.196619 | orchestrator | Create share directory -------------------------------------------------- 1.02s
2025-06-02 17:48:25.196625 | orchestrator |
2025-06-02 17:48:25.196675 | orchestrator |
2025-06-02 17:48:25.196683 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:48:25.196689 | orchestrator |
2025-06-02 17:48:25.196706 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:48:25.196712 | orchestrator | Monday 02 June 2025 17:46:36 +0000 (0:00:00.266) 0:00:00.266 ***********
2025-06-02 17:48:25.196725 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:25.196732 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:25.196738 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:25.196745 | orchestrator |
2025-06-02 17:48:25.196751 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:48:25.196758 | orchestrator | Monday 02 June 2025 17:46:36 +0000 (0:00:00.289) 0:00:00.555 ***********
2025-06-02 17:48:25.196764 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-06-02 17:48:25.196771 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-06-02 17:48:25.196777 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-06-02 17:48:25.196783 | orchestrator |
2025-06-02 17:48:25.196789 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-06-02 17:48:25.196795 | orchestrator |
2025-06-02 17:48:25.196802 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-02 17:48:25.196808 | orchestrator | Monday 02 June 2025 17:46:36 +0000 (0:00:00.412) 0:00:00.968 ***********
2025-06-02 17:48:25.196814 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:48:25.196821 | orchestrator |
2025-06-02 17:48:25.196827 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2025-06-02 17:48:25.196833 | orchestrator | Monday 02 June 2025 17:46:37 +0000 (0:00:00.511) 0:00:01.479 ***********
2025-06-02 17:48:25.196845 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:48:25.196876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:48:25.196885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-02 17:48:25.196896 | orchestrator |
2025-06-02 17:48:25.196903 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2025-06-02 17:48:25.196909 | orchestrator | Monday 02 June 2025 17:46:38 +0000 (0:00:01.103) 0:00:02.582 ***********
2025-06-02 17:48:25.196915 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:25.196922 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:25.196928 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:25.196934 | orchestrator |
2025-06-02 17:48:25.196940 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-02 17:48:25.196946 | orchestrator | Monday 02 June 2025 17:46:38 +0000 (0:00:00.463) 0:00:03.046 ***********
2025-06-02 17:48:25.196952 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2025-06-02 17:48:25.196958 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2025-06-02 17:48:25.196968 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2025-06-02 17:48:25.196975 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2025-06-02 17:48:25.196984 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2025-06-02 17:48:25.196990 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2025-06-02 17:48:25.196996 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2025-06-02 17:48:25.197002 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2025-06-02 17:48:25.197008 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2025-06-02 17:48:25.197014 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2025-06-02 17:48:25.197020 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2025-06-02 17:48:25.197026 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2025-06-02 17:48:25.197032 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2025-06-02 17:48:25.197039 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2025-06-02 17:48:25.197045 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2025-06-02 17:48:25.197051 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2025-06-02 17:48:25.197057 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2025-06-02 17:48:25.197063 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2025-06-02 17:48:25.197069 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2025-06-02 17:48:25.197079 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2025-06-02 17:48:25.197085 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2025-06-02 17:48:25.197091 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2025-06-02 17:48:25.197097 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2025-06-02 17:48:25.197103 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2025-06-02 17:48:25.197111 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2025-06-02 17:48:25.197119 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2025-06-02 17:48:25.197125 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2025-06-02 17:48:25.197131 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2025-06-02 17:48:25.197137 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2025-06-02 17:48:25.197144 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2025-06-02 17:48:25.197150 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2025-06-02 17:48:25.197156 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2025-06-02 17:48:25.197162 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2025-06-02 17:48:25.197168 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2025-06-02 17:48:25.197174 | orchestrator |
2025-06-02 17:48:25.197181 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 17:48:25.197187 | orchestrator | Monday 02 June 2025 17:46:39 +0000 (0:00:00.737) 0:00:03.784 ***********
2025-06-02 17:48:25.197193 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:25.197199 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:25.197205 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:25.197211 | orchestrator |
2025-06-02 17:48:25.197217 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 17:48:25.197224 | orchestrator | Monday 02 June 2025 17:46:39 +0000 (0:00:00.284) 0:00:04.069 ***********
2025-06-02 17:48:25.197231 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.197238 | orchestrator |
2025-06-02 17:48:25.197245 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 17:48:25.197255 | orchestrator | Monday 02 June 2025 17:46:40 +0000 (0:00:00.122) 0:00:04.192 ***********
2025-06-02 17:48:25.197263 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.197270 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:25.197277 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:25.197284 | orchestrator |
2025-06-02 17:48:25.197295 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 17:48:25.197302 | orchestrator | Monday 02 June 2025 17:46:40 +0000 (0:00:00.524) 0:00:04.716 ***********
2025-06-02 17:48:25.197310 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:25.197321 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:25.197328 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:25.197335 | orchestrator |
2025-06-02 17:48:25.197342 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 17:48:25.197349 | orchestrator | Monday 02 June 2025 17:46:40 +0000 (0:00:00.301) 0:00:05.017 ***********
2025-06-02 17:48:25.197356 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.197363 | orchestrator |
2025-06-02 17:48:25.197370 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 17:48:25.197377 | orchestrator | Monday 02 June 2025 17:46:40 +0000 (0:00:00.135) 0:00:05.153 ***********
2025-06-02 17:48:25.197384 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.197391 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:25.197398 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:25.197405 | orchestrator |
2025-06-02 17:48:25.197411 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 17:48:25.197417 | orchestrator | Monday 02 June 2025 17:46:41 +0000 (0:00:00.327) 0:00:05.481 ***********
2025-06-02 17:48:25.197423 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:25.197429 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:25.197435 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:25.197441 | orchestrator |
2025-06-02 17:48:25.197447 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 17:48:25.197453 | orchestrator | Monday 02 June 2025 17:46:41 +0000 (0:00:00.304) 0:00:05.785 ***********
2025-06-02 17:48:25.197459 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.197465 | orchestrator |
2025-06-02 17:48:25.197471 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 17:48:25.197477 | orchestrator | Monday 02 June 2025 17:46:41 +0000 (0:00:00.325) 0:00:06.111 ***********
2025-06-02 17:48:25.197483 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.197489 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:25.197495 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:25.197501 | orchestrator |
2025-06-02 17:48:25.197507 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 17:48:25.197513 | orchestrator | Monday 02 June 2025 17:46:42 +0000 (0:00:00.286) 0:00:06.398 ***********
2025-06-02 17:48:25.197519 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:25.197525 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:25.197531 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:25.197537 | orchestrator |
2025-06-02 17:48:25.197543 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 17:48:25.197549 | orchestrator | Monday 02 June 2025 17:46:42 +0000 (0:00:00.330) 0:00:06.728 ***********
2025-06-02 17:48:25.197555 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.197562 | orchestrator |
2025-06-02 17:48:25.197573 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 17:48:25.197584 | orchestrator | Monday 02 June 2025 17:46:42 +0000 (0:00:00.122) 0:00:06.851 ***********
2025-06-02 17:48:25.197593 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.197603 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:25.197613 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:25.197623 | orchestrator |
2025-06-02 17:48:25.197679 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 17:48:25.197691 | orchestrator | Monday 02 June 2025 17:46:42 +0000 (0:00:00.307) 0:00:07.159 ***********
2025-06-02 17:48:25.197700 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:25.197709 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:25.197718 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:25.197727 | orchestrator |
2025-06-02 17:48:25.197737 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 17:48:25.197857 | orchestrator | Monday 02 June 2025 17:46:43 +0000 (0:00:00.552) 0:00:07.711 ***********
2025-06-02 17:48:25.197875 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.197886 | orchestrator |
2025-06-02 17:48:25.197897 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 17:48:25.197918 | orchestrator | Monday 02 June 2025 17:46:43 +0000 (0:00:00.144) 0:00:07.855 ***********
2025-06-02 17:48:25.197928 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.197939 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:25.197949 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:25.197955 | orchestrator |
2025-06-02 17:48:25.197961 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 17:48:25.197967 | orchestrator | Monday 02 June 2025 17:46:43 +0000 (0:00:00.278) 0:00:08.133 ***********
2025-06-02 17:48:25.197973 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:25.197979 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:25.197986 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:25.197992 | orchestrator |
2025-06-02 17:48:25.197998 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 17:48:25.198004 | orchestrator | Monday 02 June 2025 17:46:44 +0000 (0:00:00.305) 0:00:08.438 ***********
2025-06-02 17:48:25.198010 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.198058 | orchestrator |
2025-06-02 17:48:25.198066 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 17:48:25.198072 | orchestrator | Monday 02 June 2025 17:46:44 +0000 (0:00:00.129) 0:00:08.568 ***********
2025-06-02 17:48:25.198078 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.198085 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:25.198091 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:25.198097 | orchestrator |
2025-06-02 17:48:25.198103 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 17:48:25.198109 | orchestrator | Monday 02 June 2025 17:46:44 +0000 (0:00:00.523) 0:00:09.092 ***********
2025-06-02 17:48:25.198115 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:25.198121 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:25.198128 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:25.198134 | orchestrator |
2025-06-02 17:48:25.198147 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 17:48:25.198153 | orchestrator | Monday 02 June 2025 17:46:45 +0000 (0:00:00.315) 0:00:09.407 ***********
2025-06-02 17:48:25.198165 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.198171 | orchestrator |
2025-06-02 17:48:25.198177 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 17:48:25.198184 | orchestrator | Monday 02 June 2025 17:46:45 +0000 (0:00:00.126) 0:00:09.534 ***********
2025-06-02 17:48:25.198190 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.198196 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:25.198202 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:25.198208 | orchestrator |
2025-06-02 17:48:25.198214 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 17:48:25.198220 | orchestrator | Monday 02 June 2025 17:46:45 +0000 (0:00:00.347) 0:00:09.881 ***********
2025-06-02 17:48:25.198226 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:25.198232 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:25.198238 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:25.198245 | orchestrator |
2025-06-02 17:48:25.198251 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 17:48:25.198257 | orchestrator | Monday 02 June 2025 17:46:46 +0000 (0:00:00.374) 0:00:10.255 ***********
2025-06-02 17:48:25.198263 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.198269 | orchestrator |
2025-06-02 17:48:25.198275 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 17:48:25.198281 | orchestrator | Monday 02 June 2025 17:46:46 +0000 (0:00:00.142) 0:00:10.397 ***********
2025-06-02 17:48:25.198287 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.198293 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:25.198299 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:25.198305 | orchestrator |
2025-06-02 17:48:25.198311 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 17:48:25.198328 | orchestrator | Monday 02 June 2025 17:46:46 +0000 (0:00:00.585) 0:00:10.983 ***********
2025-06-02 17:48:25.198334 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:25.198340 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:25.198346 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:25.198352 | orchestrator |
2025-06-02 17:48:25.198358 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 17:48:25.198365 | orchestrator | Monday 02 June 2025 17:46:47 +0000 (0:00:00.344) 0:00:11.328 ***********
2025-06-02 17:48:25.198371 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.198377 | orchestrator |
2025-06-02 17:48:25.198383 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 17:48:25.198389 | orchestrator | Monday 02 June 2025 17:46:47 +0000 (0:00:00.124) 0:00:11.452 ***********
2025-06-02 17:48:25.198395 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.198401 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:25.198407 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:25.198413 | orchestrator |
2025-06-02 17:48:25.198419 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2025-06-02 17:48:25.198425 | orchestrator | Monday 02 June 2025 17:46:47 +0000 (0:00:00.350) 0:00:11.803 ***********
2025-06-02 17:48:25.198431 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:48:25.198437 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:48:25.198443 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:48:25.198449 | orchestrator |
2025-06-02 17:48:25.198455 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2025-06-02 17:48:25.198461 | orchestrator | Monday 02 June 2025 17:46:48 +0000 (0:00:00.662) 0:00:12.465 ***********
2025-06-02 17:48:25.198467 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.198473 | orchestrator |
2025-06-02 17:48:25.198479 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2025-06-02 17:48:25.198486 | orchestrator | Monday 02 June 2025 17:46:48 +0000 (0:00:00.126) 0:00:12.592 ***********
2025-06-02 17:48:25.198493 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:48:25.198501 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:48:25.198508 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:48:25.198515 | orchestrator |
2025-06-02 17:48:25.198522 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2025-06-02 17:48:25.198529 | orchestrator | Monday 02 June 2025 17:46:48 +0000 (0:00:00.331) 0:00:12.923 ***********
2025-06-02 17:48:25.198536 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:48:25.198543 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:48:25.198550 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:48:25.198557 | orchestrator |
2025-06-02 17:48:25.198564 | orchestrator | TASK [horizon : Copying over horizon.conf] *************************************
2025-06-02 17:48:25.198571 | orchestrator | Monday 02 June 2025 17:46:50 +0000 (0:00:01.723) 0:00:14.646 ***********
2025-06-02 17:48:25.198577 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-02 17:48:25.198585 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-02 17:48:25.198592 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2)
2025-06-02 17:48:25.198599 | orchestrator |
2025-06-02 17:48:25.198606 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ********************************
2025-06-02 17:48:25.198612 | orchestrator | Monday 02
June 2025 17:46:52 +0000 (0:00:01.988) 0:00:16.635 *********** 2025-06-02 17:48:25.198620 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-02 17:48:25.198627 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-02 17:48:25.198651 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-02 17:48:25.198658 | orchestrator | 2025-06-02 17:48:25.198665 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-06-02 17:48:25.198676 | orchestrator | Monday 02 June 2025 17:46:54 +0000 (0:00:02.343) 0:00:18.979 *********** 2025-06-02 17:48:25.198688 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-02 17:48:25.198700 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-02 17:48:25.198707 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-02 17:48:25.198714 | orchestrator | 2025-06-02 17:48:25.198722 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-06-02 17:48:25.198729 | orchestrator | Monday 02 June 2025 17:46:56 +0000 (0:00:01.504) 0:00:20.484 *********** 2025-06-02 17:48:25.198735 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:48:25.198741 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:48:25.198748 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:48:25.198754 | orchestrator | 2025-06-02 17:48:25.198760 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-06-02 17:48:25.198766 | orchestrator | Monday 02 June 2025 17:46:56 +0000 (0:00:00.280) 0:00:20.765 *********** 2025-06-02 
17:48:25.198772 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:48:25.198778 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:48:25.198784 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:48:25.198790 | orchestrator | 2025-06-02 17:48:25.198796 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 17:48:25.198803 | orchestrator | Monday 02 June 2025 17:46:56 +0000 (0:00:00.274) 0:00:21.040 *********** 2025-06-02 17:48:25.198809 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:48:25.198815 | orchestrator | 2025-06-02 17:48:25.198821 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-02 17:48:25.198827 | orchestrator | Monday 02 June 2025 17:46:57 +0000 (0:00:00.768) 0:00:21.808 *********** 2025-06-02 17:48:25.198835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:48:25.198857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:48:25.198865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:48:25.198877 | orchestrator | 2025-06-02 17:48:25.198884 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-02 17:48:25.198890 | orchestrator | Monday 02 June 2025 17:46:59 +0000 (0:00:01.494) 0:00:23.303 *********** 2025-06-02 17:48:25.198902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
2025-06-02 17:48:25 | INFO  | Task b3c88258-35f5-4b57-b1d8-25accc46387e is in state SUCCESS
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 17:48:25.198921 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:48:25.198932 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 17:48:25.198943 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:48:25.198954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 17:48:25.198961 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:48:25.198967 | orchestrator | 2025-06-02 17:48:25.198973 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-02 17:48:25.198979 | orchestrator | Monday 02 June 2025 17:46:59 +0000 (0:00:00.642) 0:00:23.945 *********** 2025-06-02 17:48:25.198995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 17:48:25.199008 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:48:25.199014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 17:48:25.199025 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:48:25.199040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 
'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-02 17:48:25.199048 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:48:25.199054 | orchestrator | 2025-06-02 17:48:25.199060 | orchestrator | TASK [horizon : Deploy horizon container] 
************************************** 2025-06-02 17:48:25.199066 | orchestrator | Monday 02 June 2025 17:47:00 +0000 (0:00:01.058) 0:00:25.004 *********** 2025-06-02 17:48:25.199072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:48:25.199097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:48:25.199105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-02 17:48:25.199118 | orchestrator | 2025-06-02 17:48:25.199124 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 17:48:25.199130 | orchestrator | Monday 02 June 2025 17:47:02 +0000 (0:00:01.302) 0:00:26.307 *********** 2025-06-02 17:48:25.199136 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:48:25.199142 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:48:25.199148 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:48:25.199155 | orchestrator | 2025-06-02 17:48:25.199161 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-02 17:48:25.199167 | orchestrator | Monday 02 June 2025 17:47:02 +0000 (0:00:00.279) 0:00:26.586 *********** 2025-06-02 17:48:25.199173 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:48:25.199179 | orchestrator | 2025-06-02 17:48:25.199185 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-06-02 17:48:25.199198 | orchestrator | Monday 02 June 2025 17:47:03 +0000 (0:00:00.758) 0:00:27.345 *********** 2025-06-02 17:48:25.199209 | 
orchestrator | changed: [testbed-node-0] 2025-06-02 17:48:25.199219 | orchestrator | 2025-06-02 17:48:25.199235 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-06-02 17:48:25.199245 | orchestrator | Monday 02 June 2025 17:47:05 +0000 (0:00:02.255) 0:00:29.600 *********** 2025-06-02 17:48:25.199254 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:48:25.199260 | orchestrator | 2025-06-02 17:48:25.199266 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-06-02 17:48:25.199272 | orchestrator | Monday 02 June 2025 17:47:07 +0000 (0:00:02.089) 0:00:31.690 *********** 2025-06-02 17:48:25.199279 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:48:25.199285 | orchestrator | 2025-06-02 17:48:25.199291 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-02 17:48:25.199297 | orchestrator | Monday 02 June 2025 17:47:22 +0000 (0:00:15.338) 0:00:47.028 *********** 2025-06-02 17:48:25.199303 | orchestrator | 2025-06-02 17:48:25.199309 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-02 17:48:25.199315 | orchestrator | Monday 02 June 2025 17:47:22 +0000 (0:00:00.064) 0:00:47.093 *********** 2025-06-02 17:48:25.199321 | orchestrator | 2025-06-02 17:48:25.199327 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-06-02 17:48:25.199333 | orchestrator | Monday 02 June 2025 17:47:22 +0000 (0:00:00.065) 0:00:47.158 *********** 2025-06-02 17:48:25.199339 | orchestrator | 2025-06-02 17:48:25.199345 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-06-02 17:48:25.199351 | orchestrator | Monday 02 June 2025 17:47:23 +0000 (0:00:00.066) 0:00:47.225 *********** 2025-06-02 17:48:25.199357 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:48:25.199363 | 
orchestrator | changed: [testbed-node-2] 2025-06-02 17:48:25.199369 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:48:25.199375 | orchestrator | 2025-06-02 17:48:25.199381 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:48:25.199392 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-06-02 17:48:25.199399 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-02 17:48:25.199405 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-06-02 17:48:25.199411 | orchestrator | 2025-06-02 17:48:25.199417 | orchestrator | 2025-06-02 17:48:25.199423 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:48:25.199430 | orchestrator | Monday 02 June 2025 17:48:24 +0000 (0:01:01.755) 0:01:48.980 *********** 2025-06-02 17:48:25.199436 | orchestrator | =============================================================================== 2025-06-02 17:48:25.199442 | orchestrator | horizon : Restart horizon container ------------------------------------ 61.76s 2025-06-02 17:48:25.199448 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.34s 2025-06-02 17:48:25.199454 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.34s 2025-06-02 17:48:25.199460 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.26s 2025-06-02 17:48:25.199466 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.09s 2025-06-02 17:48:25.199473 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.99s 2025-06-02 17:48:25.199479 | orchestrator | horizon : Copying over config.json files for services ------------------- 
1.72s 2025-06-02 17:48:25.199485 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.50s 2025-06-02 17:48:25.199491 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.49s 2025-06-02 17:48:25.199497 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.30s 2025-06-02 17:48:25.199503 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.10s 2025-06-02 17:48:25.199509 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.06s 2025-06-02 17:48:25.199515 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2025-06-02 17:48:25.199521 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.76s 2025-06-02 17:48:25.199527 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-06-02 17:48:25.199533 | orchestrator | horizon : Update policy file name --------------------------------------- 0.66s 2025-06-02 17:48:25.199539 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.64s 2025-06-02 17:48:25.199545 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.59s 2025-06-02 17:48:25.199551 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s 2025-06-02 17:48:25.199557 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.52s 2025-06-02 17:48:25.199563 | orchestrator | 2025-06-02 17:48:25 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED 2025-06-02 17:48:25.199570 | orchestrator | 2025-06-02 17:48:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:28.240037 | orchestrator | 2025-06-02 17:48:28 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state 
STARTED 2025-06-02 17:48:28.242547 | orchestrator | 2025-06-02 17:48:28 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED 2025-06-02 17:48:28.242924 | orchestrator | 2025-06-02 17:48:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:31.282467 | orchestrator | 2025-06-02 17:48:31 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:48:31.284596 | orchestrator | 2025-06-02 17:48:31 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED 2025-06-02 17:48:31.284756 | orchestrator | 2025-06-02 17:48:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:34.321890 | orchestrator | 2025-06-02 17:48:34 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:48:34.324037 | orchestrator | 2025-06-02 17:48:34 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED 2025-06-02 17:48:34.324087 | orchestrator | 2025-06-02 17:48:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:37.375253 | orchestrator | 2025-06-02 17:48:37 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:48:37.377200 | orchestrator | 2025-06-02 17:48:37 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED 2025-06-02 17:48:37.377246 | orchestrator | 2025-06-02 17:48:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:40.418437 | orchestrator | 2025-06-02 17:48:40 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:48:40.420582 | orchestrator | 2025-06-02 17:48:40 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED 2025-06-02 17:48:40.420648 | orchestrator | 2025-06-02 17:48:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:43.453464 | orchestrator | 2025-06-02 17:48:43 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:48:43.454969 | orchestrator | 2025-06-02 17:48:43 | INFO  
| Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED 2025-06-02 17:48:43.455036 | orchestrator | 2025-06-02 17:48:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:46.485039 | orchestrator | 2025-06-02 17:48:46 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:48:46.485880 | orchestrator | 2025-06-02 17:48:46 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED 2025-06-02 17:48:46.485956 | orchestrator | 2025-06-02 17:48:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:49.525865 | orchestrator | 2025-06-02 17:48:49 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:48:49.527019 | orchestrator | 2025-06-02 17:48:49 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED 2025-06-02 17:48:49.527060 | orchestrator | 2025-06-02 17:48:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:52.569243 | orchestrator | 2025-06-02 17:48:52 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:48:52.571483 | orchestrator | 2025-06-02 17:48:52 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state STARTED 2025-06-02 17:48:52.571518 | orchestrator | 2025-06-02 17:48:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:55.610442 | orchestrator | 2025-06-02 17:48:55 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:48:55.614271 | orchestrator | 2025-06-02 17:48:55 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:48:55.619221 | orchestrator | 2025-06-02 17:48:55 | INFO  | Task e147e82b-bd5a-4428-99e6-6e2503a35512 is in state STARTED 2025-06-02 17:48:55.619297 | orchestrator | 2025-06-02 17:48:55 | INFO  | Task 367e4860-b0ca-46ad-8128-bd8b3403e387 is in state SUCCESS 2025-06-02 17:48:55.620464 | orchestrator | 2025-06-02 17:48:55 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state 
STARTED 2025-06-02 17:48:55.621448 | orchestrator | 2025-06-02 17:48:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:48:58.666928 | orchestrator | 2025-06-02 17:48:58 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:48:58.667786 | orchestrator | 2025-06-02 17:48:58 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:48:58.669130 | orchestrator | 2025-06-02 17:48:58 | INFO  | Task e147e82b-bd5a-4428-99e6-6e2503a35512 is in state STARTED 2025-06-02 17:48:58.670236 | orchestrator | 2025-06-02 17:48:58 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:48:58.670273 | orchestrator | 2025-06-02 17:48:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:01.702272 | orchestrator | 2025-06-02 17:49:01 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:49:01.702371 | orchestrator | 2025-06-02 17:49:01 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:01.702380 | orchestrator | 2025-06-02 17:49:01 | INFO  | Task e147e82b-bd5a-4428-99e6-6e2503a35512 is in state SUCCESS 2025-06-02 17:49:01.702385 | orchestrator | 2025-06-02 17:49:01 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:01.705805 | orchestrator | 2025-06-02 17:49:01 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:01.705901 | orchestrator | 2025-06-02 17:49:01 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:01.705917 | orchestrator | 2025-06-02 17:49:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:04.760771 | orchestrator | 2025-06-02 17:49:04 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:49:04.761005 | orchestrator | 2025-06-02 17:49:04 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 
17:49:04.761039 | orchestrator | 2025-06-02 17:49:04 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:04.762282 | orchestrator | 2025-06-02 17:49:04 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:04.763267 | orchestrator | 2025-06-02 17:49:04 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:04.765147 | orchestrator | 2025-06-02 17:49:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:07.803910 | orchestrator | 2025-06-02 17:49:07 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:49:07.804116 | orchestrator | 2025-06-02 17:49:07 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:07.806144 | orchestrator | 2025-06-02 17:49:07 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:07.806430 | orchestrator | 2025-06-02 17:49:07 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:07.807398 | orchestrator | 2025-06-02 17:49:07 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:07.807426 | orchestrator | 2025-06-02 17:49:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:10.855898 | orchestrator | 2025-06-02 17:49:10 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:49:10.856599 | orchestrator | 2025-06-02 17:49:10 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:10.857676 | orchestrator | 2025-06-02 17:49:10 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:10.858837 | orchestrator | 2025-06-02 17:49:10 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:10.860155 | orchestrator | 2025-06-02 17:49:10 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 
17:49:10.860238 | orchestrator | 2025-06-02 17:49:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:13.912621 | orchestrator | 2025-06-02 17:49:13 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:49:13.913471 | orchestrator | 2025-06-02 17:49:13 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:13.915350 | orchestrator | 2025-06-02 17:49:13 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:13.916985 | orchestrator | 2025-06-02 17:49:13 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:13.918365 | orchestrator | 2025-06-02 17:49:13 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:13.918410 | orchestrator | 2025-06-02 17:49:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:16.951568 | orchestrator | 2025-06-02 17:49:16 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:49:16.951697 | orchestrator | 2025-06-02 17:49:16 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:16.955550 | orchestrator | 2025-06-02 17:49:16 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:16.955636 | orchestrator | 2025-06-02 17:49:16 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:16.955649 | orchestrator | 2025-06-02 17:49:16 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:16.955696 | orchestrator | 2025-06-02 17:49:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:19.989521 | orchestrator | 2025-06-02 17:49:19 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state STARTED 2025-06-02 17:49:19.990636 | orchestrator | 2025-06-02 17:49:19 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:19.991071 | orchestrator 
| 2025-06-02 17:49:19 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:19.992265 | orchestrator | 2025-06-02 17:49:19 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:19.992466 | orchestrator | 2025-06-02 17:49:19 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:19.993208 | orchestrator | 2025-06-02 17:49:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:23.042319 | orchestrator | 2025-06-02 17:49:23 | INFO  | Task fb6f7153-c4ed-4987-b14b-78d7afdc1a17 is in state SUCCESS 2025-06-02 17:49:23.043490 | orchestrator | 2025-06-02 17:49:23.043557 | orchestrator | 2025-06-02 17:49:23.043571 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-06-02 17:49:23.043583 | orchestrator | 2025-06-02 17:49:23.043594 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-06-02 17:49:23.043606 | orchestrator | Monday 02 June 2025 17:48:05 +0000 (0:00:00.245) 0:00:00.245 *********** 2025-06-02 17:49:23.043617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-06-02 17:49:23.043630 | orchestrator | 2025-06-02 17:49:23.043641 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-06-02 17:49:23.043652 | orchestrator | Monday 02 June 2025 17:48:05 +0000 (0:00:00.252) 0:00:00.498 *********** 2025-06-02 17:49:23.043663 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-06-02 17:49:23.043674 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-06-02 17:49:23.043685 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-06-02 17:49:23.043720 | orchestrator | 2025-06-02 17:49:23.043765 | orchestrator | TASK 
[osism.services.cephclient : Copy configuration files] ******************** 2025-06-02 17:49:23.043785 | orchestrator | Monday 02 June 2025 17:48:07 +0000 (0:00:01.219) 0:00:01.717 *********** 2025-06-02 17:49:23.043805 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-06-02 17:49:23.043822 | orchestrator | 2025-06-02 17:49:23.043842 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-06-02 17:49:23.043860 | orchestrator | Monday 02 June 2025 17:48:08 +0000 (0:00:01.163) 0:00:02.880 *********** 2025-06-02 17:49:23.043878 | orchestrator | changed: [testbed-manager] 2025-06-02 17:49:23.043896 | orchestrator | 2025-06-02 17:49:23.043913 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-02 17:49:23.043929 | orchestrator | Monday 02 June 2025 17:48:09 +0000 (0:00:01.032) 0:00:03.913 *********** 2025-06-02 17:49:23.043947 | orchestrator | changed: [testbed-manager] 2025-06-02 17:49:23.043964 | orchestrator | 2025-06-02 17:49:23.043981 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-02 17:49:23.043998 | orchestrator | Monday 02 June 2025 17:48:10 +0000 (0:00:00.894) 0:00:04.808 *********** 2025-06-02 17:49:23.044016 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2025-06-02 17:49:23.044034 | orchestrator | ok: [testbed-manager] 2025-06-02 17:49:23.044051 | orchestrator | 2025-06-02 17:49:23.044068 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-06-02 17:49:23.044086 | orchestrator | Monday 02 June 2025 17:48:45 +0000 (0:00:35.335) 0:00:40.143 *********** 2025-06-02 17:49:23.044105 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-06-02 17:49:23.044124 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-06-02 17:49:23.044144 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-06-02 17:49:23.044893 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-06-02 17:49:23.044934 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-06-02 17:49:23.044952 | orchestrator | 2025-06-02 17:49:23.044971 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-06-02 17:49:23.044989 | orchestrator | Monday 02 June 2025 17:48:49 +0000 (0:00:03.658) 0:00:43.802 *********** 2025-06-02 17:49:23.045124 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-06-02 17:49:23.045144 | orchestrator | 2025-06-02 17:49:23.045163 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-06-02 17:49:23.045181 | orchestrator | Monday 02 June 2025 17:48:49 +0000 (0:00:00.114) 0:00:44.243 *********** 2025-06-02 17:49:23.045199 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:49:23.045217 | orchestrator | 2025-06-02 17:49:23.045235 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-06-02 17:49:23.045253 | orchestrator | Monday 02 June 2025 17:48:49 +0000 (0:00:00.293) 0:00:44.358 *********** 2025-06-02 17:49:23.045271 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:49:23.045289 | orchestrator | 2025-06-02 17:49:23.045306 | orchestrator | RUNNING HANDLER
[osism.services.cephclient : Restart cephclient service] ******* 2025-06-02 17:49:23.045322 | orchestrator | Monday 02 June 2025 17:48:50 +0000 (0:00:00.293) 0:00:44.651 *********** 2025-06-02 17:49:23.045340 | orchestrator | changed: [testbed-manager] 2025-06-02 17:49:23.045355 | orchestrator | 2025-06-02 17:49:23.045370 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-06-02 17:49:23.045387 | orchestrator | Monday 02 June 2025 17:48:51 +0000 (0:00:01.632) 0:00:46.284 *********** 2025-06-02 17:49:23.045402 | orchestrator | changed: [testbed-manager] 2025-06-02 17:49:23.045418 | orchestrator | 2025-06-02 17:49:23.045452 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-06-02 17:49:23.045469 | orchestrator | Monday 02 June 2025 17:48:52 +0000 (0:00:00.623) 0:00:46.907 *********** 2025-06-02 17:49:23.045486 | orchestrator | changed: [testbed-manager] 2025-06-02 17:49:23.045501 | orchestrator | 2025-06-02 17:49:23.045538 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-06-02 17:49:23.045555 | orchestrator | Monday 02 June 2025 17:48:52 +0000 (0:00:00.526) 0:00:47.434 *********** 2025-06-02 17:49:23.045571 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-06-02 17:49:23.045587 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-06-02 17:49:23.045603 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-06-02 17:49:23.045619 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-06-02 17:49:23.045635 | orchestrator | 2025-06-02 17:49:23.045650 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:49:23.045667 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 17:49:23.045684 | orchestrator | 2025-06-02 17:49:23.045700 | orchestrator | 2025-06-02 
17:49:23.045825 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:49:23.045849 | orchestrator | Monday 02 June 2025 17:48:54 +0000 (0:00:01.300) 0:00:48.734 *********** 2025-06-02 17:49:23.045866 | orchestrator | =============================================================================== 2025-06-02 17:49:23.045882 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.34s 2025-06-02 17:49:23.045897 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.66s 2025-06-02 17:49:23.045912 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.63s 2025-06-02 17:49:23.045929 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.30s 2025-06-02 17:49:23.045945 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.22s 2025-06-02 17:49:23.045961 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.16s 2025-06-02 17:49:23.045978 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.03s 2025-06-02 17:49:23.045994 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.89s 2025-06-02 17:49:23.046011 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.62s 2025-06-02 17:49:23.046089 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.53s 2025-06-02 17:49:23.046109 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.44s 2025-06-02 17:49:23.046127 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2025-06-02 17:49:23.046145 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2025-06-02 17:49:23.046161 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.11s 2025-06-02 17:49:23.046178 | orchestrator | 2025-06-02 17:49:23.046194 | orchestrator | 2025-06-02 17:49:23.046211 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:49:23.046227 | orchestrator | 2025-06-02 17:49:23.046243 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:49:23.046258 | orchestrator | Monday 02 June 2025 17:48:58 +0000 (0:00:00.169) 0:00:00.169 *********** 2025-06-02 17:49:23.046274 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:49:23.046289 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:49:23.046306 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:49:23.046322 | orchestrator | 2025-06-02 17:49:23.046340 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:49:23.046458 | orchestrator | Monday 02 June 2025 17:48:58 +0000 (0:00:00.293) 0:00:00.462 *********** 2025-06-02 17:49:23.046476 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-02 17:49:23.046492 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-02 17:49:23.046508 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-02 17:49:23.046524 | orchestrator | 2025-06-02 17:49:23.046540 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-06-02 17:49:23.046556 | orchestrator | 2025-06-02 17:49:23.046572 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-06-02 17:49:23.046595 | orchestrator | Monday 02 June 2025 17:48:59 +0000 (0:00:00.570) 0:00:01.033 *********** 2025-06-02 17:49:23.046605 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:49:23.046614 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:49:23.046624 | orchestrator | ok: 
[testbed-node-2] 2025-06-02 17:49:23.046634 | orchestrator | 2025-06-02 17:49:23.046643 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:49:23.046654 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:49:23.046665 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:49:23.046675 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:49:23.046684 | orchestrator | 2025-06-02 17:49:23.046694 | orchestrator | 2025-06-02 17:49:23.046703 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:49:23.046713 | orchestrator | Monday 02 June 2025 17:48:59 +0000 (0:00:00.730) 0:00:01.764 *********** 2025-06-02 17:49:23.046722 | orchestrator | =============================================================================== 2025-06-02 17:49:23.046763 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.73s 2025-06-02 17:49:23.046780 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2025-06-02 17:49:23.046808 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-06-02 17:49:23.046824 | orchestrator | 2025-06-02 17:49:23.046839 | orchestrator | 2025-06-02 17:49:23.046855 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:49:23.046871 | orchestrator | 2025-06-02 17:49:23.046886 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:49:23.046902 | orchestrator | Monday 02 June 2025 17:46:36 +0000 (0:00:00.280) 0:00:00.280 *********** 2025-06-02 17:49:23.046919 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:49:23.046937 | 
orchestrator | ok: [testbed-node-1] 2025-06-02 17:49:23.046952 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:49:23.046969 | orchestrator | 2025-06-02 17:49:23.046986 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:49:23.047001 | orchestrator | Monday 02 June 2025 17:46:36 +0000 (0:00:00.280) 0:00:00.560 *********** 2025-06-02 17:49:23.047018 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-02 17:49:23.047028 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-02 17:49:23.047038 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-02 17:49:23.047047 | orchestrator | 2025-06-02 17:49:23.047057 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-06-02 17:49:23.047066 | orchestrator | 2025-06-02 17:49:23.047142 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 17:49:23.047154 | orchestrator | Monday 02 June 2025 17:46:36 +0000 (0:00:00.439) 0:00:00.999 *********** 2025-06-02 17:49:23.047164 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:49:23.047174 | orchestrator | 2025-06-02 17:49:23.047183 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-06-02 17:49:23.047193 | orchestrator | Monday 02 June 2025 17:46:37 +0000 (0:00:00.548) 0:00:01.548 *********** 2025-06-02 17:49:23.047331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.047378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.047399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.047412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:49:23.047459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}}) 2025-06-02 17:49:23.047471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:49:23.047495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.047514 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.047529 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.047544 | orchestrator | 2025-06-02 17:49:23.047558 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-02 17:49:23.047573 | orchestrator | Monday 02 June 2025 17:46:39 +0000 (0:00:01.748) 0:00:03.297 *********** 2025-06-02 17:49:23.047606 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-02 17:49:23.047640 | orchestrator | 2025-06-02 17:49:23.047672 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-02 17:49:23.047702 | orchestrator | Monday 02 June 2025 17:46:39 +0000 (0:00:00.883) 0:00:04.180 *********** 2025-06-02 17:49:23.047822 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:49:23.047851 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:49:23.047875 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:49:23.047899 | orchestrator | 2025-06-02 17:49:23.047920 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-02 17:49:23.047944 | orchestrator | Monday 02 June 2025 17:46:40 +0000 (0:00:00.502) 0:00:04.683 *********** 2025-06-02 17:49:23.047967 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:49:23.047990 | orchestrator | 2025-06-02 17:49:23.048013 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2025-06-02 17:49:23.048037 | orchestrator | Monday 02 June 2025 17:46:41 +0000 (0:00:00.695) 0:00:05.378 *********** 2025-06-02 17:49:23.048060 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:49:23.048084 | orchestrator | 2025-06-02 17:49:23.048124 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-02 17:49:23.048167 | orchestrator | Monday 02 June 2025 17:46:41 +0000 (0:00:00.541) 0:00:05.919 *********** 2025-06-02 17:49:23.048192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.048217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.048239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.048267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:49:23.048311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:49:23.048348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:49:23.048370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.048393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.048414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.048435 | orchestrator | 2025-06-02 17:49:23.048448 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-02 17:49:23.048459 | orchestrator | Monday 02 June 2025 17:46:45 +0000 (0:00:03.432) 0:00:09.352 *********** 2025-06-02 17:49:23.048479 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 17:49:23.048521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:49:23.048535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 17:49:23.048548 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:49:23.048562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 17:49:23.048576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:49:23.048596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 17:49:23.048610 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:49:23.048634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 17:49:23.048659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:49:23.048673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 17:49:23.048682 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:49:23.048690 | orchestrator | 2025-06-02 17:49:23.048698 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-02 17:49:23.048706 | orchestrator | Monday 02 June 2025 17:46:45 +0000 (0:00:00.601) 0:00:09.953 *********** 2025-06-02 17:49:23.048715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 17:49:23.048756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:49:23.048773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 17:49:23.048781 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:49:23.048797 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 17:49:23.048806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:49:23.048814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 17:49:23.048822 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:49:23.048831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-02 17:49:23.048851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:49:23.048865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-02 17:49:23.048874 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:49:23.048882 | orchestrator | 2025-06-02 17:49:23.048890 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-02 17:49:23.048898 | orchestrator | Monday 02 June 2025 17:46:46 +0000 (0:00:00.851) 0:00:10.805 *********** 2025-06-02 17:49:23.048906 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.048915 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.048928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.048947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:49:23.048956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:49:23.048964 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:49:23.048973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.048981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.048990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.049003 | orchestrator | 2025-06-02 17:49:23.049011 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-02 17:49:23.049023 | orchestrator | Monday 02 June 2025 17:46:50 +0000 (0:00:03.581) 0:00:14.386 *********** 2025-06-02 17:49:23.049038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.049047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:49:23.049056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.049065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:49:23.049082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.049091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:49:23.049105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.049114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.049122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.049130 | orchestrator | 2025-06-02 17:49:23.049138 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-02 17:49:23.049146 | orchestrator | Monday 02 June 2025 17:46:55 +0000 (0:00:05.380) 0:00:19.767 *********** 2025-06-02 17:49:23.049154 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:49:23.049162 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:49:23.049170 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:49:23.049178 | orchestrator | 2025-06-02 17:49:23.049190 | orchestrator | TASK [keystone : 
Create Keystone domain-specific config directory] ************* 2025-06-02 17:49:23.049198 | orchestrator | Monday 02 June 2025 17:46:57 +0000 (0:00:01.497) 0:00:21.264 *********** 2025-06-02 17:49:23.049206 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:49:23.049214 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:49:23.049222 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:49:23.049229 | orchestrator | 2025-06-02 17:49:23.049237 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-02 17:49:23.049246 | orchestrator | Monday 02 June 2025 17:46:57 +0000 (0:00:00.530) 0:00:21.795 *********** 2025-06-02 17:49:23.049260 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:49:23.049274 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:49:23.049293 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:49:23.049307 | orchestrator | 2025-06-02 17:49:23.049319 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-02 17:49:23.049331 | orchestrator | Monday 02 June 2025 17:46:58 +0000 (0:00:00.508) 0:00:22.303 *********** 2025-06-02 17:49:23.049345 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:49:23.049358 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:49:23.049370 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:49:23.049384 | orchestrator | 2025-06-02 17:49:23.049392 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-02 17:49:23.049400 | orchestrator | Monday 02 June 2025 17:46:58 +0000 (0:00:00.301) 0:00:22.605 *********** 2025-06-02 17:49:23.049414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.049430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:49:23.049439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.049458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:49:23.049468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.049476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-02 17:49:23.049492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.049548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.049558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.049571 | orchestrator | 2025-06-02 17:49:23.049579 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 17:49:23.049587 | orchestrator | Monday 02 June 2025 17:47:00 +0000 (0:00:02.308) 0:00:24.914 *********** 2025-06-02 17:49:23.049595 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:49:23.049603 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:49:23.049611 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:49:23.049619 | orchestrator | 2025-06-02 17:49:23.049626 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-06-02 17:49:23.049634 | orchestrator | Monday 02 June 2025 17:47:01 +0000 (0:00:00.298) 0:00:25.212 *********** 2025-06-02 17:49:23.049642 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 17:49:23.049651 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 17:49:23.049659 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-02 17:49:23.049667 | orchestrator | 2025-06-02 
17:49:23.049674 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-06-02 17:49:23.049682 | orchestrator | Monday 02 June 2025 17:47:03 +0000 (0:00:02.060) 0:00:27.273 *********** 2025-06-02 17:49:23.049690 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:49:23.049698 | orchestrator | 2025-06-02 17:49:23.049705 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-06-02 17:49:23.049713 | orchestrator | Monday 02 June 2025 17:47:04 +0000 (0:00:00.925) 0:00:28.198 *********** 2025-06-02 17:49:23.049721 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:49:23.049748 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:49:23.049757 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:49:23.049764 | orchestrator | 2025-06-02 17:49:23.049772 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-06-02 17:49:23.049780 | orchestrator | Monday 02 June 2025 17:47:04 +0000 (0:00:00.548) 0:00:28.746 *********** 2025-06-02 17:49:23.049788 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:49:23.049795 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-02 17:49:23.049803 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-02 17:49:23.049811 | orchestrator | 2025-06-02 17:49:23.049819 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-06-02 17:49:23.049831 | orchestrator | Monday 02 June 2025 17:47:05 +0000 (0:00:01.071) 0:00:29.818 *********** 2025-06-02 17:49:23.049839 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:49:23.049847 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:49:23.049855 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:49:23.049863 | orchestrator | 2025-06-02 17:49:23.049871 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-06-02 
17:49:23.049879 | orchestrator | Monday 02 June 2025 17:47:05 +0000 (0:00:00.298) 0:00:30.117 *********** 2025-06-02 17:49:23.049887 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 17:49:23.049895 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 17:49:23.049902 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-02 17:49:23.049910 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 17:49:23.049918 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 17:49:23.049931 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-02 17:49:23.049944 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 17:49:23.049953 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 17:49:23.049960 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-02 17:49:23.049968 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 17:49:23.049976 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 17:49:23.049984 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-02 17:49:23.049992 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 17:49:23.050000 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 
'fernet-healthcheck.sh'}) 2025-06-02 17:49:23.050007 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-02 17:49:23.050064 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 17:49:23.050075 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 17:49:23.050083 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 17:49:23.050091 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 17:49:23.050099 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 17:49:23.050107 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 17:49:23.050114 | orchestrator | 2025-06-02 17:49:23.050122 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-06-02 17:49:23.050252 | orchestrator | Monday 02 June 2025 17:47:14 +0000 (0:00:08.989) 0:00:39.107 *********** 2025-06-02 17:49:23.050266 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 17:49:23.050279 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 17:49:23.050291 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 17:49:23.050305 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 17:49:23.050317 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 17:49:23.050325 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 17:49:23.050332 | orchestrator | 
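The loop items echoed throughout this log mirror kolla-ansible's per-service definition map. As a rough reconstruction of that shape from the logged values only (the real defaults live in the keystone role's `defaults/main.yml` and may differ), one entry looks like:

```yaml
# Hypothetical sketch rebuilt from the loop items above -- not the
# actual kolla-ansible role defaults.
keystone-ssh:
  container_name: keystone_ssh
  group: keystone
  enabled: true
  image: registry.osism.tech/kolla/keystone-ssh:2024.2
  volumes:
    - /etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro
    - /etc/localtime:/etc/localtime:ro
    - /etc/timezone:/etc/timezone:ro
    - kolla_logs:/var/log/kolla/
    - keystone_fernet_tokens:/etc/keystone/fernet-keys
  healthcheck:
    interval: "30"
    retries: "3"
    start_period: "5"
    test: ["CMD-SHELL", "healthcheck_listen sshd 8023"]
    timeout: "30"
```

Ansible prints this whole dict once per host per task, which is why the same structure repeats for every `changed:`/`skipping:` line above.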
2025-06-02 17:49:23.050340 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-02 17:49:23.050348 | orchestrator | Monday 02 June 2025 17:47:17 +0000 (0:00:02.594) 0:00:41.701 *********** 2025-06-02 17:49:23.050362 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.050389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.050399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-02 17:49:23.050408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:49:23.050416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:49:23.050425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-02 17:49:23.050441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.050457 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.050465 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-02 17:49:23.050473 | orchestrator | 2025-06-02 17:49:23.050481 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 17:49:23.050489 | orchestrator | Monday 02 June 2025 17:47:19 +0000 (0:00:02.348) 0:00:44.049 *********** 2025-06-02 17:49:23.050497 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:49:23.050505 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:49:23.050513 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:49:23.050520 | orchestrator | 2025-06-02 17:49:23.050528 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-02 17:49:23.050536 | orchestrator | Monday 02 June 2025 
17:47:20 +0000 (0:00:00.316) 0:00:44.366 *********** 2025-06-02 17:49:23.050544 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:49:23.050551 | orchestrator | 2025-06-02 17:49:23.050559 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-02 17:49:23.050567 | orchestrator | Monday 02 June 2025 17:47:22 +0000 (0:00:02.209) 0:00:46.575 *********** 2025-06-02 17:49:23.050575 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:49:23.050582 | orchestrator | 2025-06-02 17:49:23.050590 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-06-02 17:49:23.050598 | orchestrator | Monday 02 June 2025 17:47:25 +0000 (0:00:02.672) 0:00:49.248 *********** 2025-06-02 17:49:23.050606 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:49:23.050613 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:49:23.050621 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:49:23.050629 | orchestrator | 2025-06-02 17:49:23.050637 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-02 17:49:23.050645 | orchestrator | Monday 02 June 2025 17:47:25 +0000 (0:00:00.822) 0:00:50.070 *********** 2025-06-02 17:49:23.050652 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:49:23.050660 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:49:23.050668 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:49:23.050675 | orchestrator | 2025-06-02 17:49:23.050683 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-02 17:49:23.050696 | orchestrator | Monday 02 June 2025 17:47:26 +0000 (0:00:00.373) 0:00:50.443 *********** 2025-06-02 17:49:23.050704 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:49:23.050712 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:49:23.050719 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:49:23.050781 | orchestrator | 2025-06-02 
17:49:23.050792 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-02 17:49:23.050800 | orchestrator | Monday 02 June 2025 17:47:26 +0000 (0:00:00.352) 0:00:50.796 *********** 2025-06-02 17:49:23.050807 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:49:23.050815 | orchestrator | 2025-06-02 17:49:23.050823 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-02 17:49:23.050831 | orchestrator | Monday 02 June 2025 17:47:40 +0000 (0:00:14.333) 0:01:05.129 *********** 2025-06-02 17:49:23.050838 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:49:23.050846 | orchestrator | 2025-06-02 17:49:23.050854 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 17:49:23.050862 | orchestrator | Monday 02 June 2025 17:47:50 +0000 (0:00:10.030) 0:01:15.159 *********** 2025-06-02 17:49:23.050870 | orchestrator | 2025-06-02 17:49:23.050879 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 17:49:23.050888 | orchestrator | Monday 02 June 2025 17:47:51 +0000 (0:00:00.263) 0:01:15.423 *********** 2025-06-02 17:49:23.050898 | orchestrator | 2025-06-02 17:49:23.050907 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-02 17:49:23.050920 | orchestrator | Monday 02 June 2025 17:47:51 +0000 (0:00:00.067) 0:01:15.491 *********** 2025-06-02 17:49:23.050929 | orchestrator | 2025-06-02 17:49:23.050938 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-02 17:49:23.050947 | orchestrator | Monday 02 June 2025 17:47:51 +0000 (0:00:00.064) 0:01:15.555 *********** 2025-06-02 17:49:23.050957 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:49:23.050966 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:49:23.050975 | orchestrator | changed: 
[testbed-node-1] 2025-06-02 17:49:23.050984 | orchestrator | 2025-06-02 17:49:23.050993 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-02 17:49:23.051003 | orchestrator | Monday 02 June 2025 17:48:15 +0000 (0:00:24.438) 0:01:39.993 *********** 2025-06-02 17:49:23.051012 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:49:23.051021 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:49:23.051030 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:49:23.051039 | orchestrator | 2025-06-02 17:49:23.051049 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-06-02 17:49:23.051058 | orchestrator | Monday 02 June 2025 17:48:23 +0000 (0:00:07.467) 0:01:47.461 *********** 2025-06-02 17:49:23.051067 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:49:23.051076 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:49:23.051090 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:49:23.051100 | orchestrator | 2025-06-02 17:49:23.051109 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 17:49:23.051118 | orchestrator | Monday 02 June 2025 17:48:29 +0000 (0:00:06.233) 0:01:53.695 *********** 2025-06-02 17:49:23.051128 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:49:23.051137 | orchestrator | 2025-06-02 17:49:23.051146 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-06-02 17:49:23.051155 | orchestrator | Monday 02 June 2025 17:48:30 +0000 (0:00:00.622) 0:01:54.317 *********** 2025-06-02 17:49:23.051165 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:49:23.051174 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:49:23.051184 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:49:23.051193 | orchestrator | 2025-06-02 
17:49:23.051202 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-02 17:49:23.051212 | orchestrator | Monday 02 June 2025 17:48:30 +0000 (0:00:00.677) 0:01:54.994 *********** 2025-06-02 17:49:23.051231 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:49:23.051240 | orchestrator | 2025-06-02 17:49:23.051253 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-02 17:49:23.051266 | orchestrator | Monday 02 June 2025 17:48:32 +0000 (0:00:01.724) 0:01:56.719 *********** 2025-06-02 17:49:23.051279 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-06-02 17:49:23.051292 | orchestrator | 2025-06-02 17:49:23.051305 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-02 17:49:23.051319 | orchestrator | Monday 02 June 2025 17:48:42 +0000 (0:00:09.619) 0:02:06.339 *********** 2025-06-02 17:49:23.051332 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-02 17:49:23.051345 | orchestrator | 2025-06-02 17:49:23.051359 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-06-02 17:49:23.051372 | orchestrator | Monday 02 June 2025 17:49:02 +0000 (0:00:20.251) 0:02:26.590 *********** 2025-06-02 17:49:23.051381 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-02 17:49:23.051390 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-02 17:49:23.051397 | orchestrator | 2025-06-02 17:49:23.051405 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-02 17:49:23.051413 | orchestrator | Monday 02 June 2025 17:49:16 +0000 (0:00:14.141) 0:02:40.732 *********** 2025-06-02 17:49:23.051421 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
17:49:23.051429 | orchestrator | 2025-06-02 17:49:23.051436 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-02 17:49:23.051444 | orchestrator | Monday 02 June 2025 17:49:17 +0000 (0:00:00.734) 0:02:41.466 *********** 2025-06-02 17:49:23.051452 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:49:23.051460 | orchestrator | 2025-06-02 17:49:23.051467 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-02 17:49:23.051475 | orchestrator | Monday 02 June 2025 17:49:17 +0000 (0:00:00.169) 0:02:41.636 *********** 2025-06-02 17:49:23.051483 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:49:23.051491 | orchestrator | 2025-06-02 17:49:23.051498 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-06-02 17:49:23.051506 | orchestrator | Monday 02 June 2025 17:49:17 +0000 (0:00:00.167) 0:02:41.803 *********** 2025-06-02 17:49:23.051514 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:49:23.051521 | orchestrator | 2025-06-02 17:49:23.051529 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-06-02 17:49:23.051537 | orchestrator | Monday 02 June 2025 17:49:17 +0000 (0:00:00.312) 0:02:42.116 *********** 2025-06-02 17:49:23.051545 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:49:23.051553 | orchestrator | 2025-06-02 17:49:23.051560 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-02 17:49:23.051568 | orchestrator | Monday 02 June 2025 17:49:21 +0000 (0:00:03.401) 0:02:45.517 *********** 2025-06-02 17:49:23.051576 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:49:23.051584 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:49:23.051591 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:49:23.051599 | orchestrator | 2025-06-02 17:49:23.051607 | orchestrator | 
PLAY RECAP ********************************************************************* 2025-06-02 17:49:23.051615 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-06-02 17:49:23.051630 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-02 17:49:23.051638 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-02 17:49:23.051646 | orchestrator | 2025-06-02 17:49:23.051654 | orchestrator | 2025-06-02 17:49:23.051669 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:49:23.051676 | orchestrator | Monday 02 June 2025 17:49:21 +0000 (0:00:00.510) 0:02:46.028 *********** 2025-06-02 17:49:23.051684 | orchestrator | =============================================================================== 2025-06-02 17:49:23.051692 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 24.44s 2025-06-02 17:49:23.051700 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.25s 2025-06-02 17:49:23.051707 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.33s 2025-06-02 17:49:23.051715 | orchestrator | service-ks-register : keystone | Creating endpoints -------------------- 14.14s 2025-06-02 17:49:23.051723 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.03s 2025-06-02 17:49:23.051758 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.62s 2025-06-02 17:49:23.051767 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.99s 2025-06-02 17:49:23.051775 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.47s 2025-06-02 17:49:23.051782 | orchestrator | keystone : Restart 
keystone container ----------------------------------- 6.23s 2025-06-02 17:49:23.051790 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.38s 2025-06-02 17:49:23.051798 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.58s 2025-06-02 17:49:23.051805 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.43s 2025-06-02 17:49:23.051813 | orchestrator | keystone : Creating default user role ----------------------------------- 3.40s 2025-06-02 17:49:23.051821 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.67s 2025-06-02 17:49:23.051828 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.59s 2025-06-02 17:49:23.051836 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.35s 2025-06-02 17:49:23.051844 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.31s 2025-06-02 17:49:23.051851 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.21s 2025-06-02 17:49:23.051859 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 2.06s 2025-06-02 17:49:23.051867 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.75s 2025-06-02 17:49:23.051875 | orchestrator | 2025-06-02 17:49:23 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:23.051883 | orchestrator | 2025-06-02 17:49:23 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:23.051891 | orchestrator | 2025-06-02 17:49:23 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:23.052824 | orchestrator | 2025-06-02 17:49:23 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:23.053430 | 
orchestrator | 2025-06-02 17:49:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:26.094550 | orchestrator | 2025-06-02 17:49:26 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:26.094645 | orchestrator | 2025-06-02 17:49:26 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:26.096279 | orchestrator | 2025-06-02 17:49:26 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:26.096340 | orchestrator | 2025-06-02 17:49:26 | INFO  | Task 7c41b57d-d1c6-43b6-ba1c-a7c0e49d4bd9 is in state STARTED 2025-06-02 17:49:26.096350 | orchestrator | 2025-06-02 17:49:26 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:49:26.096359 | orchestrator | 2025-06-02 17:49:26 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:26.096393 | orchestrator | 2025-06-02 17:49:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:29.129890 | orchestrator | 2025-06-02 17:49:29 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:29.130416 | orchestrator | 2025-06-02 17:49:29 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:29.131539 | orchestrator | 2025-06-02 17:49:29 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:29.132280 | orchestrator | 2025-06-02 17:49:29 | INFO  | Task 7c41b57d-d1c6-43b6-ba1c-a7c0e49d4bd9 is in state STARTED 2025-06-02 17:49:29.132958 | orchestrator | 2025-06-02 17:49:29 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:49:29.133570 | orchestrator | 2025-06-02 17:49:29 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:29.133752 | orchestrator | 2025-06-02 17:49:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:32.181806 | orchestrator | 2025-06-02 
17:49:32 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:32.183789 | orchestrator | 2025-06-02 17:49:32 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:32.184351 | orchestrator | 2025-06-02 17:49:32 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:32.188896 | orchestrator | 2025-06-02 17:49:32 | INFO  | Task 7c41b57d-d1c6-43b6-ba1c-a7c0e49d4bd9 is in state STARTED 2025-06-02 17:49:32.189401 | orchestrator | 2025-06-02 17:49:32 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:49:32.190269 | orchestrator | 2025-06-02 17:49:32 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:32.190307 | orchestrator | 2025-06-02 17:49:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:35.237420 | orchestrator | 2025-06-02 17:49:35 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:35.238189 | orchestrator | 2025-06-02 17:49:35 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:35.239645 | orchestrator | 2025-06-02 17:49:35 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:35.241344 | orchestrator | 2025-06-02 17:49:35 | INFO  | Task 7c41b57d-d1c6-43b6-ba1c-a7c0e49d4bd9 is in state STARTED 2025-06-02 17:49:35.242343 | orchestrator | 2025-06-02 17:49:35 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:49:35.243729 | orchestrator | 2025-06-02 17:49:35 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:35.243833 | orchestrator | 2025-06-02 17:49:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:38.284312 | orchestrator | 2025-06-02 17:49:38 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:38.287141 | orchestrator | 2025-06-02 
17:49:38 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:38.290897 | orchestrator | 2025-06-02 17:49:38 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:38.291238 | orchestrator | 2025-06-02 17:49:38 | INFO  | Task 7c41b57d-d1c6-43b6-ba1c-a7c0e49d4bd9 is in state STARTED 2025-06-02 17:49:38.292061 | orchestrator | 2025-06-02 17:49:38 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:49:38.292710 | orchestrator | 2025-06-02 17:49:38 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:38.292804 | orchestrator | 2025-06-02 17:49:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:41.332653 | orchestrator | 2025-06-02 17:49:41 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:41.333323 | orchestrator | 2025-06-02 17:49:41 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state STARTED 2025-06-02 17:49:41.334617 | orchestrator | 2025-06-02 17:49:41 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:41.334701 | orchestrator | 2025-06-02 17:49:41 | INFO  | Task 7c41b57d-d1c6-43b6-ba1c-a7c0e49d4bd9 is in state STARTED 2025-06-02 17:49:41.337372 | orchestrator | 2025-06-02 17:49:41 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:49:41.338178 | orchestrator | 2025-06-02 17:49:41 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:41.338229 | orchestrator | 2025-06-02 17:49:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:44.367363 | orchestrator | 2025-06-02 17:49:44 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:44.368392 | orchestrator | 2025-06-02 17:49:44 | INFO  | Task d14fcf9d-774a-4490-980f-a80f9b5c1738 is in state SUCCESS 2025-06-02 17:49:44.370282 | orchestrator | 2025-06-02 
17:49:44 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:44.371237 | orchestrator | 2025-06-02 17:49:44 | INFO  | Task 7c41b57d-d1c6-43b6-ba1c-a7c0e49d4bd9 is in state SUCCESS 2025-06-02 17:49:44.372954 | orchestrator | 2025-06-02 17:49:44 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:49:44.375011 | orchestrator | 2025-06-02 17:49:44 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:44.375057 | orchestrator | 2025-06-02 17:49:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:47.416675 | orchestrator | 2025-06-02 17:49:47 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:47.416968 | orchestrator | 2025-06-02 17:49:47 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:47.416994 | orchestrator | 2025-06-02 17:49:47 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:49:47.417007 | orchestrator | 2025-06-02 17:49:47 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:49:47.417032 | orchestrator | 2025-06-02 17:49:47 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:47.417043 | orchestrator | 2025-06-02 17:49:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:50.452456 | orchestrator | 2025-06-02 17:49:50 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:50.452815 | orchestrator | 2025-06-02 17:49:50 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:50.457401 | orchestrator | 2025-06-02 17:49:50 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:49:50.459524 | orchestrator | 2025-06-02 17:49:50 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:49:50.461379 | orchestrator | 2025-06-02 
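The interleaved "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines come from a poll-until-terminal loop: each task ID is re-checked until it reaches SUCCESS (or FAILURE) and then drops out of the output, as d14fcf9d and 7c41b57d do above. A minimal sketch of that pattern (hypothetical; not the actual osism client code):

```python
def poll_tasks(fetch_state, task_ids):
    """Poll every task until it reaches a terminal state.

    `fetch_state(task_id)` returns 'STARTED', 'SUCCESS' or 'FAILURE';
    finished tasks leave the pending set, mirroring how task IDs
    disappear from the log output above once they complete.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for tid in sorted(pending):
            state = fetch_state(tid)
            if state in ("SUCCESS", "FAILURE"):
                results[tid] = state
        pending -= set(results)
        # the real loop logs 'Wait 1 second(s) until the next check'
        # and sleeps here; omitted so this sketch runs instantly
    return results
```
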
17:49:50 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:50.461420 | orchestrator | 2025-06-02 17:49:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:53.506220 | orchestrator | 2025-06-02 17:49:53 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:53.507698 | orchestrator | 2025-06-02 17:49:53 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:53.510451 | orchestrator | 2025-06-02 17:49:53 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:49:53.511563 | orchestrator | 2025-06-02 17:49:53 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:49:53.513028 | orchestrator | 2025-06-02 17:49:53 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:53.513355 | orchestrator | 2025-06-02 17:49:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:56.545998 | orchestrator | 2025-06-02 17:49:56 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:56.546876 | orchestrator | 2025-06-02 17:49:56 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:49:56.547487 | orchestrator | 2025-06-02 17:49:56 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:49:56.548653 | orchestrator | 2025-06-02 17:49:56 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:49:56.551291 | orchestrator | 2025-06-02 17:49:56 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:49:56.551365 | orchestrator | 2025-06-02 17:49:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:49:59.618433 | orchestrator | 2025-06-02 17:49:59 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state STARTED 2025-06-02 17:49:59.618514 | orchestrator | 2025-06-02 17:49:59 | INFO  | Task 
01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:50:20.938520 | orchestrator | 2025-06-02 17:50:20 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:50:23.981506 | orchestrator | 2025-06-02 17:50:23.981596 | orchestrator | 2025-06-02 17:50:23.981608 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:50:23.981618 | orchestrator | 2025-06-02 17:50:23.981627 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:50:23.981635 | orchestrator | Monday 02 June 2025 17:49:05 +0000 (0:00:00.310) 0:00:00.310 *********** 2025-06-02 17:50:23.981643 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:50:23.981652 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:50:23.981660 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:50:23.981668 | orchestrator | ok: [testbed-manager] 2025-06-02 17:50:23.981676 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:50:23.981684 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:50:23.981691 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:50:23.981699 | orchestrator | 2025-06-02 17:50:23.981707 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:50:23.981716 | orchestrator | Monday 02 June 2025 17:49:06 +0000 (0:00:01.025) 0:00:01.336 *********** 2025-06-02 17:50:23.981724 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-02 17:50:23.981732 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-02 17:50:23.981740 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-02 17:50:23.981748 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-06-02 17:50:23.981756 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-02 17:50:23.981763 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-02 
17:50:23.981891 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-02 17:50:23.981904 | orchestrator | 2025-06-02 17:50:23.981912 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-02 17:50:23.981924 | orchestrator | 2025-06-02 17:50:23.981938 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-02 17:50:23.981950 | orchestrator | Monday 02 June 2025 17:49:08 +0000 (0:00:01.769) 0:00:03.105 *********** 2025-06-02 17:50:23.981964 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:50:23.981978 | orchestrator | 2025-06-02 17:50:23.981991 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-02 17:50:23.982004 | orchestrator | Monday 02 June 2025 17:49:10 +0000 (0:00:01.876) 0:00:04.981 *********** 2025-06-02 17:50:23.982075 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-06-02 17:50:23.982085 | orchestrator | 2025-06-02 17:50:23.982093 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-02 17:50:23.982101 | orchestrator | Monday 02 June 2025 17:49:14 +0000 (0:00:04.233) 0:00:09.214 *********** 2025-06-02 17:50:23.982109 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-02 17:50:23.982120 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-02 17:50:23.982128 | orchestrator | 2025-06-02 17:50:23.982136 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-02 17:50:23.982143 | orchestrator | Monday 02 June 2025 17:49:21 
+0000 (0:00:07.118) 0:00:16.333 *********** 2025-06-02 17:50:23.982151 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 17:50:23.982159 | orchestrator | 2025-06-02 17:50:23.982167 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-06-02 17:50:23.982175 | orchestrator | Monday 02 June 2025 17:49:25 +0000 (0:00:03.656) 0:00:19.989 *********** 2025-06-02 17:50:23.982206 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 17:50:23.982215 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-06-02 17:50:23.982223 | orchestrator | 2025-06-02 17:50:23.982230 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-02 17:50:23.982238 | orchestrator | Monday 02 June 2025 17:49:29 +0000 (0:00:04.105) 0:00:24.095 *********** 2025-06-02 17:50:23.982246 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 17:50:23.982254 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-06-02 17:50:23.982262 | orchestrator | 2025-06-02 17:50:23.982269 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-02 17:50:23.982277 | orchestrator | Monday 02 June 2025 17:49:36 +0000 (0:00:06.844) 0:00:30.940 *********** 2025-06-02 17:50:23.982285 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-06-02 17:50:23.982292 | orchestrator | 2025-06-02 17:50:23.982300 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:50:23.982308 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:50:23.982316 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:50:23.982325 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:50:23.982345 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:50:23.982354 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:50:23.982379 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:50:23.982387 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:50:23.982395 | orchestrator | 2025-06-02 17:50:23.982403 | orchestrator | 2025-06-02 17:50:23.982411 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:50:23.982419 | orchestrator | Monday 02 June 2025 17:49:42 +0000 (0:00:06.174) 0:00:37.115 *********** 2025-06-02 17:50:23.982427 | orchestrator | =============================================================================== 2025-06-02 17:50:23.982434 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.12s 2025-06-02 17:50:23.982442 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.84s 2025-06-02 17:50:23.982450 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.17s 2025-06-02 17:50:23.982457 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 4.23s 2025-06-02 17:50:23.982465 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.11s 2025-06-02 17:50:23.982474 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.66s 2025-06-02 17:50:23.982482 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.88s 2025-06-02 17:50:23.982491 | orchestrator | Group hosts based on enabled services ----------------------------------- 
1.77s 2025-06-02 17:50:23.982500 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.03s 2025-06-02 17:50:23.982509 | orchestrator | 2025-06-02 17:50:23.982518 | orchestrator | None 2025-06-02 17:50:23.982528 | orchestrator | 2025-06-02 17:50:23.982537 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-06-02 17:50:23.982546 | orchestrator | 2025-06-02 17:50:23.982555 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-06-02 17:50:23.982570 | orchestrator | Monday 02 June 2025 17:48:58 +0000 (0:00:00.273) 0:00:00.273 *********** 2025-06-02 17:50:23.982579 | orchestrator | changed: [testbed-manager] 2025-06-02 17:50:23.982588 | orchestrator | 2025-06-02 17:50:23.982597 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-02 17:50:23.982606 | orchestrator | Monday 02 June 2025 17:48:59 +0000 (0:00:01.225) 0:00:01.498 *********** 2025-06-02 17:50:23.982614 | orchestrator | changed: [testbed-manager] 2025-06-02 17:50:23.982719 | orchestrator | 2025-06-02 17:50:23.982729 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-02 17:50:23.982738 | orchestrator | Monday 02 June 2025 17:49:00 +0000 (0:00:00.934) 0:00:02.433 *********** 2025-06-02 17:50:23.982747 | orchestrator | changed: [testbed-manager] 2025-06-02 17:50:23.982756 | orchestrator | 2025-06-02 17:50:23.982765 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-02 17:50:23.982775 | orchestrator | Monday 02 June 2025 17:49:01 +0000 (0:00:00.970) 0:00:03.404 *********** 2025-06-02 17:50:23.982784 | orchestrator | changed: [testbed-manager] 2025-06-02 17:50:23.982792 | orchestrator | 2025-06-02 17:50:23.982800 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-02 
17:50:23.982807 | orchestrator | Monday 02 June 2025 17:49:02 +0000 (0:00:01.310) 0:00:04.714 *********** 2025-06-02 17:50:23.982838 | orchestrator | changed: [testbed-manager] 2025-06-02 17:50:23.982852 | orchestrator | 2025-06-02 17:50:23.982863 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-02 17:50:23.982871 | orchestrator | Monday 02 June 2025 17:49:04 +0000 (0:00:01.283) 0:00:05.998 *********** 2025-06-02 17:50:23.982878 | orchestrator | changed: [testbed-manager] 2025-06-02 17:50:23.982886 | orchestrator | 2025-06-02 17:50:23.982894 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-02 17:50:23.982902 | orchestrator | Monday 02 June 2025 17:49:05 +0000 (0:00:01.074) 0:00:07.072 *********** 2025-06-02 17:50:23.982909 | orchestrator | changed: [testbed-manager] 2025-06-02 17:50:23.982920 | orchestrator | 2025-06-02 17:50:23.982933 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-02 17:50:23.982946 | orchestrator | Monday 02 June 2025 17:49:07 +0000 (0:00:02.102) 0:00:09.175 *********** 2025-06-02 17:50:23.982959 | orchestrator | changed: [testbed-manager] 2025-06-02 17:50:23.982971 | orchestrator | 2025-06-02 17:50:23.982985 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-02 17:50:23.982999 | orchestrator | Monday 02 June 2025 17:49:08 +0000 (0:00:01.192) 0:00:10.367 *********** 2025-06-02 17:50:23.983013 | orchestrator | changed: [testbed-manager] 2025-06-02 17:50:23.983023 | orchestrator | 2025-06-02 17:50:23.983031 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-02 17:50:23.983039 | orchestrator | Monday 02 June 2025 17:49:57 +0000 (0:00:48.949) 0:00:59.317 *********** 2025-06-02 17:50:23.983047 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:50:23.983054 | 
orchestrator | 2025-06-02 17:50:23.983062 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 17:50:23.983070 | orchestrator | 2025-06-02 17:50:23.983078 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 17:50:23.983085 | orchestrator | Monday 02 June 2025 17:49:57 +0000 (0:00:00.191) 0:00:59.509 *********** 2025-06-02 17:50:23.983093 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:50:23.983101 | orchestrator | 2025-06-02 17:50:23.983109 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 17:50:23.983117 | orchestrator | 2025-06-02 17:50:23.983125 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 17:50:23.983133 | orchestrator | Monday 02 June 2025 17:50:09 +0000 (0:00:11.686) 0:01:11.195 *********** 2025-06-02 17:50:23.983140 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:50:23.983148 | orchestrator | 2025-06-02 17:50:23.983163 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-02 17:50:23.983171 | orchestrator | 2025-06-02 17:50:23.983187 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-02 17:50:23.983194 | orchestrator | Monday 02 June 2025 17:50:20 +0000 (0:00:11.269) 0:01:22.464 *********** 2025-06-02 17:50:23.983202 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:50:23.983210 | orchestrator | 2025-06-02 17:50:23.983227 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:50:23.983235 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-02 17:50:23.983243 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 
17:50:23.983251 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:50:23.983259 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:50:23.983267 | orchestrator | 2025-06-02 17:50:23.983275 | orchestrator | 2025-06-02 17:50:23.983295 | orchestrator | 2025-06-02 17:50:23.983303 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:50:23.983311 | orchestrator | Monday 02 June 2025 17:50:21 +0000 (0:00:01.165) 0:01:23.629 *********** 2025-06-02 17:50:23.983318 | orchestrator | =============================================================================== 2025-06-02 17:50:23.983326 | orchestrator | Create admin user ------------------------------------------------------ 48.95s 2025-06-02 17:50:23.983334 | orchestrator | Restart ceph manager service ------------------------------------------- 24.12s 2025-06-02 17:50:23.983342 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.10s 2025-06-02 17:50:23.983350 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.31s 2025-06-02 17:50:23.983357 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.28s 2025-06-02 17:50:23.983365 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.23s 2025-06-02 17:50:23.983373 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.19s 2025-06-02 17:50:23.983381 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.07s 2025-06-02 17:50:23.983388 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.97s 2025-06-02 17:50:23.983396 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.93s 2025-06-02 
17:50:23.983404 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.19s 2025-06-02 17:50:23.983412 | orchestrator | 2025-06-02 17:50:23 | INFO  | Task f04f7796-ef2f-4721-b500-79c4d1d93276 is in state SUCCESS 2025-06-02 17:50:23.983420 | orchestrator | 2025-06-02 17:50:23 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:50:23.983428 | orchestrator | 2025-06-02 17:50:23 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:50:23.983436 | orchestrator | 2025-06-02 17:50:23 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:50:23.983958 | orchestrator | 2025-06-02 17:50:23 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:50:23.983986 | orchestrator | 2025-06-02 17:50:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:50:27.031748 | orchestrator | 2025-06-02 17:50:27 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:50:27.037692 | orchestrator | 2025-06-02 17:50:27 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:50:27.039606 | orchestrator | 2025-06-02 17:50:27 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:50:27.041205 | orchestrator | 2025-06-02 17:50:27 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:50:27.041581 | orchestrator | 2025-06-02 17:50:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:50:30.078992 | orchestrator | 2025-06-02 17:50:30 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:50:30.079067 | orchestrator | 2025-06-02 17:50:30 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:50:30.080467 | orchestrator | 2025-06-02 17:50:30 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:50:30.080522 | 
orchestrator | 2025-06-02 17:50:30 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:50:30.080529 | orchestrator | 2025-06-02 17:50:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:50:33.123927 | orchestrator | 2025-06-02 17:50:33 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:50:33.124221 | orchestrator | 2025-06-02 17:50:33 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:50:33.124942 | orchestrator | 2025-06-02 17:50:33 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:50:33.125470 | orchestrator | 2025-06-02 17:50:33 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:50:33.125501 | orchestrator | 2025-06-02 17:50:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:50:36.169958 | orchestrator | 2025-06-02 17:50:36 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:50:36.170311 | orchestrator | 2025-06-02 17:50:36 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:50:36.171256 | orchestrator | 2025-06-02 17:50:36 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:50:36.171951 | orchestrator | 2025-06-02 17:50:36 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:50:36.171985 | orchestrator | 2025-06-02 17:50:36 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:50:39.205239 | orchestrator | 2025-06-02 17:50:39 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:50:39.205441 | orchestrator | 2025-06-02 17:50:39 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:50:39.206638 | orchestrator | 2025-06-02 17:50:39 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:50:39.207437 | orchestrator | 2025-06-02 
17:50:39 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED [identical polling cycles condensed: tasks d146b7dd-864f-4471-a696-b050939b60c6, 6ea3090d-6474-4bff-b9b1-d56f8f0e1088, 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82, and 01a6b2d8-a66b-47d5-a503-1f45e50424a4 remained in state STARTED, re-checked every ~3 seconds from 17:50:42 through 17:52:01] 2025-06-02 17:52:01 | INFO  | Task 
01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:52:01.499091 | orchestrator | 2025-06-02 17:52:01 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:52:04.536674 | orchestrator | 2025-06-02 17:52:04 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:52:04.536768 | orchestrator | 2025-06-02 17:52:04 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:52:04.537575 | orchestrator | 2025-06-02 17:52:04 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:52:04.538274 | orchestrator | 2025-06-02 17:52:04 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:52:04.538340 | orchestrator | 2025-06-02 17:52:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:52:07.566231 | orchestrator | 2025-06-02 17:52:07 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:52:07.566714 | orchestrator | 2025-06-02 17:52:07 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:52:07.566981 | orchestrator | 2025-06-02 17:52:07 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:52:07.567766 | orchestrator | 2025-06-02 17:52:07 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:52:07.567802 | orchestrator | 2025-06-02 17:52:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:52:10.613574 | orchestrator | 2025-06-02 17:52:10 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:52:10.614876 | orchestrator | 2025-06-02 17:52:10 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:52:10.617051 | orchestrator | 2025-06-02 17:52:10 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:52:10.619086 | orchestrator | 2025-06-02 17:52:10 | INFO  | Task 
01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:52:10.619744 | orchestrator | 2025-06-02 17:52:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:52:13.658897 | orchestrator | 2025-06-02 17:52:13 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:52:13.660733 | orchestrator | 2025-06-02 17:52:13 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:52:13.661821 | orchestrator | 2025-06-02 17:52:13 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:52:13.662842 | orchestrator | 2025-06-02 17:52:13 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:52:13.662888 | orchestrator | 2025-06-02 17:52:13 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:52:16.714305 | orchestrator | 2025-06-02 17:52:16 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:52:16.718745 | orchestrator | 2025-06-02 17:52:16 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:52:16.722216 | orchestrator | 2025-06-02 17:52:16 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:52:16.724739 | orchestrator | 2025-06-02 17:52:16 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:52:16.724786 | orchestrator | 2025-06-02 17:52:16 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:52:19.770945 | orchestrator | 2025-06-02 17:52:19 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state STARTED 2025-06-02 17:52:19.773388 | orchestrator | 2025-06-02 17:52:19 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:52:19.776701 | orchestrator | 2025-06-02 17:52:19 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED 2025-06-02 17:52:19.779227 | orchestrator | 2025-06-02 17:52:19 | INFO  | Task 
01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED 2025-06-02 17:52:19.779299 | orchestrator | 2025-06-02 17:52:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:52:22.823566 | orchestrator | 2025-06-02 17:52:22 | INFO  | Task d146b7dd-864f-4471-a696-b050939b60c6 is in state SUCCESS 2025-06-02 17:52:22.824594 | orchestrator | 2025-06-02 17:52:22.824674 | orchestrator | 2025-06-02 17:52:22.824687 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:52:22.824697 | orchestrator | 2025-06-02 17:52:22.824707 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:52:22.824718 | orchestrator | Monday 02 June 2025 17:49:05 +0000 (0:00:00.298) 0:00:00.298 *********** 2025-06-02 17:52:22.824728 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:52:22.824739 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:52:22.824748 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:52:22.824758 | orchestrator | 2025-06-02 17:52:22.824767 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:52:22.824777 | orchestrator | Monday 02 June 2025 17:49:06 +0000 (0:00:00.351) 0:00:00.650 *********** 2025-06-02 17:52:22.824787 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-02 17:52:22.824796 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-06-02 17:52:22.824806 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-02 17:52:22.824815 | orchestrator | 2025-06-02 17:52:22.824839 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-06-02 17:52:22.824859 | orchestrator | 2025-06-02 17:52:22.824868 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 17:52:22.824878 | orchestrator | Monday 02 June 2025 17:49:06 +0000 (0:00:00.434) 
0:00:01.085 *********** 2025-06-02 17:52:22.824887 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:52:22.824920 | orchestrator | 2025-06-02 17:52:22.824931 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-06-02 17:52:22.824940 | orchestrator | Monday 02 June 2025 17:49:07 +0000 (0:00:01.043) 0:00:02.129 *********** 2025-06-02 17:52:22.824950 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-06-02 17:52:22.824959 | orchestrator | 2025-06-02 17:52:22.825025 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-06-02 17:52:22.825035 | orchestrator | Monday 02 June 2025 17:49:12 +0000 (0:00:04.695) 0:00:06.825 *********** 2025-06-02 17:52:22.825045 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-06-02 17:52:22.825055 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-06-02 17:52:22.825064 | orchestrator | 2025-06-02 17:52:22.825074 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-06-02 17:52:22.825083 | orchestrator | Monday 02 June 2025 17:49:19 +0000 (0:00:06.711) 0:00:13.536 *********** 2025-06-02 17:52:22.825093 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-06-02 17:52:22.825103 | orchestrator | 2025-06-02 17:52:22.825112 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-06-02 17:52:22.825122 | orchestrator | Monday 02 June 2025 17:49:22 +0000 (0:00:03.537) 0:00:17.073 *********** 2025-06-02 17:52:22.825132 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 17:52:22.825142 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-02 17:52:22.825151 | 
orchestrator | 2025-06-02 17:52:22.825161 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-02 17:52:22.825171 | orchestrator | Monday 02 June 2025 17:49:26 +0000 (0:00:04.080) 0:00:21.154 *********** 2025-06-02 17:52:22.825181 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 17:52:22.825192 | orchestrator | 2025-06-02 17:52:22.825203 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-02 17:52:22.825214 | orchestrator | Monday 02 June 2025 17:49:30 +0000 (0:00:04.077) 0:00:25.232 *********** 2025-06-02 17:52:22.825226 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-02 17:52:22.825237 | orchestrator | 2025-06-02 17:52:22.825248 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-02 17:52:22.825259 | orchestrator | Monday 02 June 2025 17:49:36 +0000 (0:00:05.388) 0:00:30.620 *********** 2025-06-02 17:52:22.825307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:52:22.825332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 
2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:52:22.825346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-06-02 17:52:22.825358 | orchestrator |
2025-06-02 17:52:22.825370 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-06-02 17:52:22.825386 | orchestrator | Monday 02 June 2025 17:49:44 +0000 (0:00:08.093) 0:00:38.713 ***********
2025-06-02 17:52:22.825405 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:52:22.825431 | orchestrator |
2025-06-02 17:52:22.825456 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-06-02 17:52:22.825476 | orchestrator | Monday 02 June 2025 17:49:44 +0000 (0:00:00.594) 0:00:39.308 ***********
2025-06-02 17:52:22.825494 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:52:22.825510 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:52:22.825525 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:52:22.825541 | orchestrator |
2025-06-02 17:52:22.825558 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-06-02 17:52:22.825574 | orchestrator | Monday 02 June 2025 17:49:49 +0000 (0:00:04.283) 0:00:43.591 ***********
2025-06-02 17:52:22.825591 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-02 17:52:22.825610 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-02 17:52:22.825630 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-02 17:52:22.825649 | orchestrator |
2025-06-02 17:52:22.825666 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-06-02 17:52:22.825684 | orchestrator | Monday 02 June 2025 17:49:50 +0000 (0:00:01.713) 0:00:45.305 ***********
2025-06-02 17:52:22.825700 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-02 17:52:22.825717 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-02 17:52:22.825734 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-06-02 17:52:22.825751 | orchestrator |
2025-06-02 17:52:22.825767 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-06-02 17:52:22.825783 | orchestrator | Monday 02 June 2025 17:49:52 +0000 (0:00:01.225) 0:00:46.530 ***********
2025-06-02 17:52:22.825799 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:52:22.825814 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:52:22.825828 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:52:22.825844 | orchestrator |
2025-06-02 17:52:22.825861 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-06-02 17:52:22.825878 | orchestrator | Monday 02 June 2025 17:49:52 +0000 (0:00:00.139) 0:00:47.334 ***********
2025-06-02 17:52:22.825894 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:22.825911 | orchestrator |
2025-06-02 17:52:22.825921 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-06-02 17:52:22.825930 | orchestrator | Monday 02 June 2025 17:49:52 +0000 (0:00:00.294) 0:00:47.473 ***********
2025-06-02 17:52:22.825940 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:22.825949 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:22.825959 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:22.826000 | orchestrator | 2025-06-02 17:52:22.826011 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 17:52:22.826084 | orchestrator | Monday 02 June 2025 17:49:53 +0000 (0:00:00.294) 0:00:47.768 *********** 2025-06-02 17:52:22.826095 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:52:22.826105 | orchestrator | 2025-06-02 17:52:22.826115 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-06-02 17:52:22.826124 | orchestrator | Monday 02 June 2025 17:49:53 +0000 (0:00:00.559) 0:00:48.328 *********** 2025-06-02 17:52:22.826155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 
'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:52:22.826179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:52:22.826191 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:52:22.826208 | 
orchestrator | 2025-06-02 17:52:22.826217 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-02 17:52:22.826230 | orchestrator | Monday 02 June 2025 17:50:00 +0000 (0:00:06.486) 0:00:54.815 *********** 2025-06-02 17:52:22.826266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}}}})  2025-06-02 17:52:22.826286 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:22.826306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:52:22.826334 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:22.826375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 
'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:52:22.826397 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:52:22.826415 | orchestrator | 2025-06-02 17:52:22.826433 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-02 17:52:22.826448 | orchestrator | Monday 02 June 2025 17:50:03 +0000 (0:00:03.498) 0:00:58.313 *********** 
2025-06-02 17:52:22.826467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:52:22.826499 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:22.826531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:52:22.826544 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:22.826554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-02 17:52:22.826565 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:52:22.826575 | orchestrator | 2025-06-02 17:52:22.826584 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-02 17:52:22.826594 | orchestrator | Monday 02 June 2025 17:50:07 +0000 (0:00:03.809) 0:01:02.123 *********** 2025-06-02 17:52:22.826610 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:22.826620 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:22.826629 | orchestrator | skipping: [testbed-node-1] 2025-06-02 
17:52:22.826639 | orchestrator | 2025-06-02 17:52:22.826648 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-02 17:52:22.826658 | orchestrator | Monday 02 June 2025 17:50:12 +0000 (0:00:04.478) 0:01:06.601 *********** 2025-06-02 17:52:22.826684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}}}}) 2025-06-02 17:52:22.826696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:52:22.826707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:52:22.826723 | orchestrator | 2025-06-02 17:52:22.826733 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-02 17:52:22.826743 | orchestrator | Monday 02 June 2025 17:50:17 +0000 (0:00:05.368) 0:01:11.969 *********** 2025-06-02 17:52:22.826752 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:52:22.826762 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:52:22.826771 | orchestrator | 
changed: [testbed-node-2]
2025-06-02 17:52:22.826780 | orchestrator |
2025-06-02 17:52:22.826790 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-06-02 17:52:22.826800 | orchestrator | Monday 02 June 2025 17:50:27 +0000 (0:00:09.653) 0:01:21.623 ***********
2025-06-02 17:52:22.826814 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:22.826824 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:22.826842 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:22.826856 | orchestrator |
2025-06-02 17:52:22.826872 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-06-02 17:52:22.826898 | orchestrator | Monday 02 June 2025 17:50:32 +0000 (0:00:05.431) 0:01:27.055 ***********
2025-06-02 17:52:22.826915 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:22.826931 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:22.826947 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:22.826959 | orchestrator |
2025-06-02 17:52:22.827081 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-06-02 17:52:22.827094 | orchestrator | Monday 02 June 2025 17:50:39 +0000 (0:00:06.924) 0:01:33.979 ***********
2025-06-02 17:52:22.827104 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:22.827114 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:22.827123 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:22.827133 | orchestrator |
2025-06-02 17:52:22.827143 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-06-02 17:52:22.827153 | orchestrator | Monday 02 June 2025 17:50:44 +0000 (0:00:05.313) 0:01:39.293 ***********
2025-06-02 17:52:22.827163 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:22.827172 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:22.827182 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:22.827191 | orchestrator |
2025-06-02 17:52:22.827201 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-06-02 17:52:22.827210 | orchestrator | Monday 02 June 2025 17:50:51 +0000 (0:00:06.343) 0:01:45.637 ***********
2025-06-02 17:52:22.827231 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:22.827241 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:22.827250 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:22.827260 | orchestrator |
2025-06-02 17:52:22.827269 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-06-02 17:52:22.827279 | orchestrator | Monday 02 June 2025 17:50:51 +0000 (0:00:00.502) 0:01:46.139 ***********
2025-06-02 17:52:22.827289 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-06-02 17:52:22.827299 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:22.827309 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-06-02 17:52:22.827318 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:22.827328 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-06-02 17:52:22.827337 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:22.827347 | orchestrator |
2025-06-02 17:52:22.827356 | orchestrator | TASK [glance : Check glance containers] ****************************************
2025-06-02 17:52:22.827366 | orchestrator | Monday 02 June 2025 17:50:58 +0000 (0:00:06.470) 0:01:52.610 ***********
2025-06-02 17:52:22.827377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2',
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:52:22.827405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:52:22.827424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-02 17:52:22.827435 | orchestrator | 2025-06-02 17:52:22.827445 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-02 17:52:22.827454 | orchestrator | Monday 02 June 2025 17:51:03 +0000 (0:00:05.586) 0:01:58.196 *********** 2025-06-02 17:52:22.827464 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:22.827474 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:52:22.827483 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:22.827493 | orchestrator | 2025-06-02 17:52:22.827502 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-02 17:52:22.827512 | orchestrator | Monday 02 June 2025 17:51:04 +0000 (0:00:00.356) 0:01:58.553 *********** 2025-06-02 17:52:22.827524 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:52:22.827585 | orchestrator | 2025-06-02 17:52:22.827606 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] 
**********
2025-06-02 17:52:22.827622 | orchestrator | Monday 02 June 2025 17:51:06 +0000 (0:00:02.168) 0:02:00.721 ***********
2025-06-02 17:52:22.827638 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:52:22.827653 | orchestrator |
2025-06-02 17:52:22.827668 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-06-02 17:52:22.827684 | orchestrator | Monday 02 June 2025 17:51:08 +0000 (0:00:02.542) 0:02:03.264 ***********
2025-06-02 17:52:22.827700 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:52:22.827715 | orchestrator |
2025-06-02 17:52:22.827731 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-06-02 17:52:22.827747 | orchestrator | Monday 02 June 2025 17:51:11 +0000 (0:00:02.546) 0:02:05.810 ***********
2025-06-02 17:52:22.827763 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:52:22.827791 | orchestrator |
2025-06-02 17:52:22.827815 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-06-02 17:52:22.827832 | orchestrator | Monday 02 June 2025 17:51:41 +0000 (0:00:30.046) 0:02:35.857 ***********
2025-06-02 17:52:22.827848 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:52:22.827865 | orchestrator |
2025-06-02 17:52:22.827891 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-06-02 17:52:22.827908 | orchestrator | Monday 02 June 2025 17:51:44 +0000 (0:00:03.585) 0:02:39.443 ***********
2025-06-02 17:52:22.827924 | orchestrator |
2025-06-02 17:52:22.827938 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-06-02 17:52:22.827953 | orchestrator | Monday 02 June 2025 17:51:45 +0000 (0:00:00.129) 0:02:39.572 ***********
2025-06-02 17:52:22.828040 | orchestrator |
2025-06-02 17:52:22.828060 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-06-02 17:52:22.828076 | orchestrator | Monday 02 June 2025 17:51:45 +0000 (0:00:00.070) 0:02:39.643 ***********
2025-06-02 17:52:22.828093 | orchestrator |
2025-06-02 17:52:22.828110 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-06-02 17:52:22.828126 | orchestrator | Monday 02 June 2025 17:51:45 +0000 (0:00:00.073) 0:02:39.717 ***********
2025-06-02 17:52:22.828142 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:52:22.828158 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:52:22.828174 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:52:22.828191 | orchestrator |
2025-06-02 17:52:22.828207 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:52:22.828225 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 17:52:22.828245 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 17:52:22.828263 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 17:52:22.828280 | orchestrator |
2025-06-02 17:52:22.828296 | orchestrator |
2025-06-02 17:52:22.828360 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:52:22.828381 | orchestrator | Monday 02 June 2025 17:52:21 +0000 (0:00:36.030) 0:03:15.747 ***********
2025-06-02 17:52:22.828398 | orchestrator | ===============================================================================
2025-06-02 17:52:22.828415 | orchestrator | glance : Restart glance-api container ---------------------------------- 36.03s
2025-06-02 17:52:22.828432 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 30.05s
2025-06-02 17:52:22.828447 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 9.65s
2025-06-02 17:52:22.828462 | orchestrator | glance : Ensuring config directories exist ------------------------------ 8.09s
2025-06-02 17:52:22.828480 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 6.92s
2025-06-02 17:52:22.828498 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.71s
2025-06-02 17:52:22.828514 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.49s
2025-06-02 17:52:22.828529 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 6.47s
2025-06-02 17:52:22.828544 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 6.34s
2025-06-02 17:52:22.828560 | orchestrator | glance : Check glance containers ---------------------------------------- 5.59s
2025-06-02 17:52:22.828577 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.43s
2025-06-02 17:52:22.828594 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 5.39s
2025-06-02 17:52:22.828610 | orchestrator | glance : Copying over config.json files for services -------------------- 5.37s
2025-06-02 17:52:22.828628 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.31s
2025-06-02 17:52:22.828663 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.70s
2025-06-02 17:52:22.828682 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.48s
2025-06-02 17:52:22.828700 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.28s
2025-06-02 17:52:22.828719 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.08s
2025-06-02 17:52:22.828731 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 4.08s
2025-06-02 17:52:22.828741 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.81s
2025-06-02 17:52:22.828879 | orchestrator | 2025-06-02 17:52:22 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED
2025-06-02 17:52:22.828894 | orchestrator | 2025-06-02 17:52:22 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED
2025-06-02 17:52:22.828904 | orchestrator | 2025-06-02 17:52:22 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED
2025-06-02 17:52:22.829529 | orchestrator | 2025-06-02 17:52:22 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED
2025-06-02 17:52:22.829627 | orchestrator | 2025-06-02 17:52:22 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:52:25.880304 | orchestrator | 2025-06-02 17:52:25 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED
2025-06-02 17:52:25.883831 | orchestrator | 2025-06-02 17:52:25 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED
2025-06-02 17:52:25.885345 | orchestrator | 2025-06-02 17:52:25 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED
2025-06-02 17:52:25.887414 | orchestrator | 2025-06-02 17:52:25 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state STARTED
2025-06-02 17:52:25.887455 | orchestrator | 2025-06-02 17:52:25 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:52:28.938453 | orchestrator | 2025-06-02 17:52:28 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED
2025-06-02 17:52:28.940499 | orchestrator | 2025-06-02 17:52:28 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED
2025-06-02 17:52:28.942083 | orchestrator | 2025-06-02 17:52:28 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state STARTED
2025-06-02 17:52:28.943169 | orchestrator | 2025-06-02 17:52:28 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED
2025-06-02 17:52:28.946835 | orchestrator | 2025-06-02 17:52:28 | INFO  | Task 01a6b2d8-a66b-47d5-a503-1f45e50424a4 is in state SUCCESS
2025-06-02 17:52:28.948963 | orchestrator |
2025-06-02 17:52:28.949233 | orchestrator |
2025-06-02 17:52:28.949269 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:52:28.949292 | orchestrator |
2025-06-02 17:52:28.949310 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:52:28.949332 | orchestrator | Monday 02 June 2025 17:48:58 +0000 (0:00:00.261) 0:00:00.261 ***********
2025-06-02 17:52:28.949461 | orchestrator | ok: [testbed-manager]
2025-06-02 17:52:28.949486 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:52:28.949506 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:52:28.949524 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:52:28.949544 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:52:28.949565 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:52:28.949586 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:52:28.949633 | orchestrator |
2025-06-02 17:52:28.949652 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:52:28.949679 | orchestrator | Monday 02 June 2025 17:48:58 +0000 (0:00:00.709) 0:00:00.971 ***********
2025-06-02 17:52:28.949694 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-06-02 17:52:28.949771 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-06-02 17:52:28.949785 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-06-02 17:52:28.949798 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-06-02 17:52:28.949810 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-06-02 17:52:28.949823 | orchestrator | ok: [testbed-node-4] =>
(item=enable_prometheus_True) 2025-06-02 17:52:28.949842 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-06-02 17:52:28.949865 | orchestrator | 2025-06-02 17:52:28.949892 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-06-02 17:52:28.949911 | orchestrator | 2025-06-02 17:52:28.949929 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 17:52:28.949946 | orchestrator | Monday 02 June 2025 17:48:59 +0000 (0:00:00.697) 0:00:01.669 *********** 2025-06-02 17:52:28.950154 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:52:28.950195 | orchestrator | 2025-06-02 17:52:28.950214 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-02 17:52:28.950230 | orchestrator | Monday 02 June 2025 17:49:01 +0000 (0:00:01.516) 0:00:03.185 *********** 2025-06-02 17:52:28.950260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.950288 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 17:52:28.950327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.950348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.950398 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.950436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.950456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.950475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.950495 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.950516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.950544 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.950565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.950597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.950630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.950651 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 
17:52:28.950674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.950693 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.950720 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.950740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.950789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.950802 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.950814 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.950825 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.950837 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.950849 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.950867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.950879 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.950919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.950932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.950944 | orchestrator | 2025-06-02 17:52:28.950955 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-02 17:52:28.950966 | orchestrator | 
Monday 02 June 2025 17:49:04 +0000 (0:00:03.562) 0:00:06.748 *********** 2025-06-02 17:52:28.951017 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:52:28.951029 | orchestrator | 2025-06-02 17:52:28.951040 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-02 17:52:28.951051 | orchestrator | Monday 02 June 2025 17:49:06 +0000 (0:00:01.585) 0:00:08.333 *********** 2025-06-02 17:52:28.951063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.951173 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 17:52:28.951222 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.951351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.951377 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.951389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.951401 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.951412 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.951424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.951436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.951454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.951486 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.951506 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 
17:52:28.951519 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.951531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.951542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.951554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.951566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.951595 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 17:52:28.951618 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.951631 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.951642 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.951654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.951665 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.951678 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.951702 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.951714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.951733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.951745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.951757 | orchestrator | 2025-06-02 17:52:28.951768 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-02 17:52:28.951779 | orchestrator | Monday 02 June 2025 17:49:12 +0000 (0:00:06.611) 0:00:14.945 *********** 2025-06-02 17:52:28.951791 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 17:52:28.951803 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.951814 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.951837 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 17:52:28.951856 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.951867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.951879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.951890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.951903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.951924 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:52:28.951936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.951953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.951965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952019 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.952068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952088 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952128 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:52:28.952140 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:28.952151 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:28.952175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.952196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952215 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952233 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:52:28.952251 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.952289 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952310 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952329 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:52:28.952356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.952370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952393 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952405 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:52:28.952416 | orchestrator | 2025-06-02 17:52:28.952428 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-02 17:52:28.952439 | orchestrator | Monday 02 June 2025 17:49:14 +0000 (0:00:01.589) 0:00:16.535 *********** 2025-06-02 17:52:28.952451 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-02 17:52:28.952471 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.952483 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952500 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-02 17:52:28.952514 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.952545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': 
{'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952599 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:52:28.952610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.952627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952689 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:28.952700 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:52:28.952712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.952723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-02 17:52:28.952775 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:28.952792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.952804 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952835 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:52:28.952847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.952859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952883 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:52:28.952910 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-02 17:52:28.952930 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.952960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-02 17:52:28.953023 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:52:28.953043 | orchestrator | 2025-06-02 17:52:28.953062 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-02 17:52:28.953079 | orchestrator | Monday 02 June 2025 17:49:16 +0000 (0:00:01.795) 0:00:18.330 *********** 2025-06-02 17:52:28.953099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.953119 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 17:52:28.953138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.953158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.953186 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.953208 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.953240 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.953267 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.953280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.953293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.953304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.953316 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.953333 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.953346 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.953372 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.953385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.953396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.953408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.953420 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 17:52:28.953444 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.953457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.953824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.953852 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.953864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.953876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.953888 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-06-02 17:52:28.953899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.953917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.953930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.953959 | orchestrator | 2025-06-02 17:52:28.954013 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-06-02 17:52:28.954082 | orchestrator | Monday 02 June 2025 17:49:22 +0000 (0:00:06.069) 0:00:24.399 *********** 2025-06-02 17:52:28.954101 | orchestrator | ok: [testbed-manager -> localhost] 
2025-06-02 17:52:28.954120 | orchestrator | 2025-06-02 17:52:28.954139 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-06-02 17:52:28.954171 | orchestrator | Monday 02 June 2025 17:49:23 +0000 (0:00:00.795) 0:00:25.195 *********** 2025-06-02 17:52:28.954194 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1107975, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.954218 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1107975, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.954239 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1107975, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:52:28.954262 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1107965, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9757383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.954283 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1107965, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9757383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.954313 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1107975, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 
17:52:28.954356 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1107975, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.954378 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1107975, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955224 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1107943, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9697382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955264 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1107975, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955276 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1107943, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9697382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955288 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1107945, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9707384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955322 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1107965, 
'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9757383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955393 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1107965, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9757383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955416 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1107965, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9757383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955436 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1107965, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9757383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955456 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1107965, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9757383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:52:28.955478 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1107960, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9747384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955501 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1107943, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9697382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 
17:52:28.955542 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1107945, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9707384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955564 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1107943, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9697382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955640 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1107943, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9697382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955664 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1107945, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9707384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955679 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1107960, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9747384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955690 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1107948, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9727383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.955701 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1107943, 'dev': 
113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9697382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
[2025-06-02 17:52:28.955728 through 17:52:28.957349 | orchestrator | per-item loop output condensed; every item echoed an identical root-owned, mode 0644 regular-file stat dict, differing only in path, size, inode, and ctime]
[skipping: testbed-node-0 through testbed-node-5, for items under /operations/prometheus/: hardware.rules, cadvisor.rules, ceph.rules, haproxy.rules, node.rules, prometheus-extra.rules, redfish.rules, openstack.rules, ceph.rec.rules, fluentd-aggregator.rules, alertmanager.rec.rules, mysql.rules, rabbitmq.rules]
[changed: testbed-manager, for items under /operations/prometheus/: alertmanager.rules, cadvisor.rules, hardware.rules, ceph.rules, haproxy.rules, node.rules]
2025-06-02 17:52:28.957389 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1107961, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9747384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957410 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1107941, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9687383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957430 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1107961, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9747384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957456 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1107941, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9687383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957476 | orchestrator | 
changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1107974, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:52:28.957505 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1107956, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9737384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957520 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1107988, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9817383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957540 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1107988, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9817383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957551 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1107961, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9747384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957562 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1107988, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9817383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957579 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1107977, 'dev': 113, 'nlink': 1, 'atime': 
1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957591 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:52:28.957603 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1107956, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9737384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957621 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1107961, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9747384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957639 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1107956, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9737384, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957650 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1107956, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9737384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957661 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1107988, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9817383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957673 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1107977, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957684 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:52:28.957700 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1107990, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9817383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:52:28.957712 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1107988, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9817383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957730 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1107956, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9737384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957756 | orchestrator | 
skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1107977, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957775 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1107977, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957794 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:28.957814 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:28.957833 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1107977, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957845 | orchestrator | skipping: 
[testbed-node-4] 2025-06-02 17:52:28.957856 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1107956, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9737384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957882 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1107977, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-02 17:52:28.957900 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:52:28.957918 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1107970, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9767385, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:52:28.957958 | orchestrator | changed: 
[testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1107947, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9717383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:52:28.958121 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1107958, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9737384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:52:28.958137 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1107941, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9687383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:52:28.958149 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1107961, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9747384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:52:28.958160 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1107988, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9817383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:52:28.958178 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1107956, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9737384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:52:28.958190 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1107977, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 
'mtime': 1748870561.0, 'ctime': 1748884080.9777384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-02 17:52:28.958210 | orchestrator | 2025-06-02 17:52:28.958221 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-06-02 17:52:28.958233 | orchestrator | Monday 02 June 2025 17:49:53 +0000 (0:00:30.271) 0:00:55.466 *********** 2025-06-02 17:52:28.958253 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:52:28.958264 | orchestrator | 2025-06-02 17:52:28.958275 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-06-02 17:52:28.958286 | orchestrator | Monday 02 June 2025 17:49:54 +0000 (0:00:00.745) 0:00:56.212 *********** 2025-06-02 17:52:28.958298 | orchestrator | [WARNING]: Skipped 2025-06-02 17:52:28.958309 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958321 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-06-02 17:52:28.958331 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958342 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-06-02 17:52:28.958353 | orchestrator | [WARNING]: Skipped 2025-06-02 17:52:28.958363 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958374 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-06-02 17:52:28.958385 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958396 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-06-02 17:52:28.958407 | orchestrator | [WARNING]: Skipped 2025-06-02 17:52:28.958418 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958429 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-06-02 17:52:28.958440 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958450 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-06-02 17:52:28.958461 | orchestrator | [WARNING]: Skipped 2025-06-02 17:52:28.958471 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958482 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-06-02 17:52:28.958492 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958503 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-06-02 17:52:28.958514 | orchestrator | [WARNING]: Skipped 2025-06-02 17:52:28.958524 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958535 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-06-02 17:52:28.958546 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958556 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-06-02 17:52:28.958565 | orchestrator | [WARNING]: Skipped 2025-06-02 17:52:28.958575 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958584 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-06-02 17:52:28.958594 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958603 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-06-02 17:52:28.958613 | orchestrator | [WARNING]: Skipped 2025-06-02 17:52:28.958622 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958631 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-06-02 17:52:28.958647 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-06-02 17:52:28.958657 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-06-02 17:52:28.958666 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-02 17:52:28.958676 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:52:28.958685 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-02 17:52:28.958695 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 17:52:28.958704 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 17:52:28.958714 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-02 17:52:28.958723 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 17:52:28.958733 | orchestrator | 2025-06-02 17:52:28.958742 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-06-02 17:52:28.958752 | orchestrator | Monday 02 June 2025 17:49:57 +0000 (0:00:02.891) 0:00:59.104 *********** 2025-06-02 17:52:28.958762 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 17:52:28.958778 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 17:52:28.958788 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 17:52:28.958798 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:52:28.958808 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:52:28.958817 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:52:28.958827 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-06-02 17:52:28.958836 | 
orchestrator | skipping: [testbed-node-3]
2025-06-02 17:52:28.958846 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 17:52:28.958855 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:52:28.958865 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 17:52:28.958874 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:52:28.958884 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-02 17:52:28.958894 | orchestrator |
2025-06-02 17:52:28.958903 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-06-02 17:52:28.958913 | orchestrator | Monday 02 June 2025 17:50:18 +0000 (0:00:21.828) 0:01:20.932 ***********
2025-06-02 17:52:28.958928 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 17:52:28.958938 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:28.958948 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 17:52:28.958958 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:52:28.958967 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 17:52:28.959004 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:28.959014 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 17:52:28.959024 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:28.959033 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 17:52:28.959043 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:52:28.959053 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 17:52:28.959062 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:52:28.959072 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-02 17:52:28.959082 | orchestrator |
2025-06-02 17:52:28.959091 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-06-02 17:52:28.959101 | orchestrator | Monday 02 June 2025 17:50:23 +0000 (0:00:04.563) 0:01:25.496 ***********
2025-06-02 17:52:28.959119 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 17:52:28.959129 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:28.959139 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 17:52:28.959149 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 17:52:28.959159 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:28.959168 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 17:52:28.959178 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:52:28.959187 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 17:52:28.959197 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:28.959207 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 17:52:28.959216 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:52:28.959225 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-06-02 17:52:28.959235 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:52:28.959245 | orchestrator |
2025-06-02 17:52:28.959254 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-06-02 17:52:28.959264 | orchestrator | Monday 02 June 2025 17:50:26 +0000 (0:00:02.914) 0:01:28.410 ***********
2025-06-02 17:52:28.959273 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 17:52:28.959283 | orchestrator |
2025-06-02 17:52:28.959292 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-06-02 17:52:28.959302 | orchestrator | Monday 02 June 2025 17:50:27 +0000 (0:00:00.761) 0:01:29.172 ***********
2025-06-02 17:52:28.959311 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:52:28.959321 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:28.959330 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:28.959339 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:28.959349 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:52:28.959358 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:52:28.959368 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:52:28.959377 | orchestrator |
2025-06-02 17:52:28.959387 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-06-02 17:52:28.959396 | orchestrator | Monday 02 June 2025 17:50:28 +0000 (0:00:01.047) 0:01:30.219 ***********
2025-06-02 17:52:28.959411 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:52:28.959421 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:52:28.959430 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:52:28.959439 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:52:28.959449 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:52:28.959458 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:52:28.959468 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:52:28.959477 | orchestrator |
2025-06-02 17:52:28.959487 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-06-02 17:52:28.959497 | orchestrator | Monday 02 June 2025 17:50:31 +0000 (0:00:03.177) 0:01:33.396 ***********
2025-06-02 17:52:28.959507 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 17:52:28.959516 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 17:52:28.959526 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 17:52:28.959535 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:52:28.959557 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:28.959566 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:28.959576 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 17:52:28.959585 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:52:28.959601 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 17:52:28.959611 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:28.959621 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 17:52:28.959630 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:52:28.959640 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-06-02 17:52:28.959649 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:52:28.959659 | orchestrator |
2025-06-02 17:52:28.959668 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-06-02 17:52:28.959678 | orchestrator | Monday 02 June 2025 17:50:34 +0000 (0:00:02.893) 0:01:36.290 ***********
2025-06-02 17:52:28.959687 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 17:52:28.959697 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:28.959707 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 17:52:28.959716 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:28.959726 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 17:52:28.959735 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:28.959745 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 17:52:28.959754 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:52:28.959764 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 17:52:28.959774 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 17:52:28.959783 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:52:28.959793 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-06-02 17:52:28.959802 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:52:28.959812 | orchestrator |
2025-06-02 17:52:28.959821 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2025-06-02 17:52:28.959831 | orchestrator | Monday 02 June 2025 17:50:36 +0000 (0:00:02.536) 0:01:38.827 ***********
2025-06-02 17:52:28.959841 | orchestrator | [WARNING]: Skipped
2025-06-02 17:52:28.959850 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2025-06-02 17:52:28.959859 | orchestrator | due to this access issue:
2025-06-02 17:52:28.959869 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2025-06-02 17:52:28.959879 | orchestrator | not a directory
2025-06-02 17:52:28.959888 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-02 17:52:28.959898 | orchestrator |
2025-06-02 17:52:28.959907 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2025-06-02 17:52:28.959917 | orchestrator | Monday 02 June 2025 17:50:38 +0000 (0:00:01.885) 0:01:40.713 ***********
2025-06-02 17:52:28.959926 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:52:28.959936 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:28.959945 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:28.959955 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:28.959964 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:52:28.960023 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:52:28.960041 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:52:28.960068 | orchestrator |
2025-06-02 17:52:28.960086 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2025-06-02 17:52:28.960101 | orchestrator | Monday 02 June 2025 17:50:39 +0000 (0:00:01.271) 0:01:41.984 ***********
2025-06-02 17:52:28.960116 | orchestrator | skipping: [testbed-manager]
2025-06-02 17:52:28.960131 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:52:28.960147 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:52:28.960165 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:52:28.960181 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:52:28.960198 | orchestrator | skipping: [testbed-node-4]
2025-06-02
17:52:28.960214 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:52:28.960230 | orchestrator | 2025-06-02 17:52:28.960248 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-02 17:52:28.960274 | orchestrator | Monday 02 June 2025 17:50:41 +0000 (0:00:01.229) 0:01:43.214 *********** 2025-06-02 17:52:28.960293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.960323 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-02 17:52:28.960343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.960361 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.960380 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.960400 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.960430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.960455 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.960468 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-02 17:52:28.960486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-06-02 17:52:28.960498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.960509 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.960522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.960554 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.960576 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.960597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.960623 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-02 17:52:28.960640 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.960653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.960667 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.960693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.960709 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.960730 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.960745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.960766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.960780 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-02 17:52:28.960794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.960816 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.960830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-02 17:52:28.960839 | orchestrator | 2025-06-02 17:52:28.960848 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-02 17:52:28.960856 | orchestrator | Monday 02 June 2025 17:50:46 +0000 (0:00:05.250) 0:01:48.465 *********** 2025-06-02 17:52:28.960864 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-02 17:52:28.960872 | orchestrator | skipping: [testbed-manager] 2025-06-02 17:52:28.960880 | orchestrator | 2025-06-02 17:52:28.960888 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:52:28.960896 | orchestrator | Monday 02 June 2025 17:50:48 +0000 (0:00:01.691) 0:01:50.156 *********** 2025-06-02 17:52:28.960903 | orchestrator | 2025-06-02 17:52:28.960911 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:52:28.960918 | orchestrator 
| Monday 02 June 2025 17:50:48 +0000 (0:00:00.542) 0:01:50.699 *********** 2025-06-02 17:52:28.960926 | orchestrator | 2025-06-02 17:52:28.960934 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:52:28.960946 | orchestrator | Monday 02 June 2025 17:50:48 +0000 (0:00:00.139) 0:01:50.839 *********** 2025-06-02 17:52:28.960954 | orchestrator | 2025-06-02 17:52:28.960962 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:52:28.960994 | orchestrator | Monday 02 June 2025 17:50:48 +0000 (0:00:00.247) 0:01:51.087 *********** 2025-06-02 17:52:28.961011 | orchestrator | 2025-06-02 17:52:28.961019 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:52:28.961027 | orchestrator | Monday 02 June 2025 17:50:49 +0000 (0:00:00.077) 0:01:51.164 *********** 2025-06-02 17:52:28.961035 | orchestrator | 2025-06-02 17:52:28.961043 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:52:28.961050 | orchestrator | Monday 02 June 2025 17:50:49 +0000 (0:00:00.056) 0:01:51.220 *********** 2025-06-02 17:52:28.961058 | orchestrator | 2025-06-02 17:52:28.961066 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-02 17:52:28.961073 | orchestrator | Monday 02 June 2025 17:50:49 +0000 (0:00:00.058) 0:01:51.279 *********** 2025-06-02 17:52:28.961081 | orchestrator | 2025-06-02 17:52:28.961089 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-06-02 17:52:28.961096 | orchestrator | Monday 02 June 2025 17:50:49 +0000 (0:00:00.080) 0:01:51.359 *********** 2025-06-02 17:52:28.961104 | orchestrator | changed: [testbed-manager] 2025-06-02 17:52:28.961112 | orchestrator | 2025-06-02 17:52:28.961120 | orchestrator | RUNNING HANDLER [prometheus : Restart 
prometheus-node-exporter container] ****** 2025-06-02 17:52:28.961133 | orchestrator | Monday 02 June 2025 17:51:06 +0000 (0:00:17.636) 0:02:08.996 *********** 2025-06-02 17:52:28.961141 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:52:28.961149 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:52:28.961157 | orchestrator | changed: [testbed-manager] 2025-06-02 17:52:28.961165 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:52:28.961178 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:52:28.961186 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:52:28.961194 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:52:28.961201 | orchestrator | 2025-06-02 17:52:28.961211 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-02 17:52:28.961224 | orchestrator | Monday 02 June 2025 17:51:20 +0000 (0:00:13.874) 0:02:22.870 *********** 2025-06-02 17:52:28.961236 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:52:28.961249 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:52:28.961262 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:52:28.961275 | orchestrator | 2025-06-02 17:52:28.961287 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-02 17:52:28.961302 | orchestrator | Monday 02 June 2025 17:51:30 +0000 (0:00:09.691) 0:02:32.562 *********** 2025-06-02 17:52:28.961318 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:52:28.961332 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:52:28.961346 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:52:28.961359 | orchestrator | 2025-06-02 17:52:28.961372 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-02 17:52:28.961386 | orchestrator | Monday 02 June 2025 17:51:40 +0000 (0:00:09.644) 0:02:42.206 *********** 2025-06-02 17:52:28.961399 | orchestrator | changed: 
[testbed-node-0] 2025-06-02 17:52:28.961413 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:52:28.961423 | orchestrator | changed: [testbed-manager] 2025-06-02 17:52:28.961430 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:52:28.961438 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:52:28.961446 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:52:28.961453 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:52:28.961461 | orchestrator | 2025-06-02 17:52:28.961469 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-02 17:52:28.961476 | orchestrator | Monday 02 June 2025 17:51:51 +0000 (0:00:11.077) 0:02:53.284 *********** 2025-06-02 17:52:28.961484 | orchestrator | changed: [testbed-manager] 2025-06-02 17:52:28.961491 | orchestrator | 2025-06-02 17:52:28.961499 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-02 17:52:28.961507 | orchestrator | Monday 02 June 2025 17:52:03 +0000 (0:00:12.253) 0:03:05.537 *********** 2025-06-02 17:52:28.961533 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:52:28.961541 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:52:28.961549 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:52:28.961557 | orchestrator | 2025-06-02 17:52:28.961565 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-02 17:52:28.961572 | orchestrator | Monday 02 June 2025 17:52:15 +0000 (0:00:12.161) 0:03:17.699 *********** 2025-06-02 17:52:28.961580 | orchestrator | changed: [testbed-manager] 2025-06-02 17:52:28.961588 | orchestrator | 2025-06-02 17:52:28.961596 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-02 17:52:28.961603 | orchestrator | Monday 02 June 2025 17:52:20 +0000 (0:00:05.242) 0:03:22.941 *********** 2025-06-02 17:52:28.961611 | orchestrator | changed: 
[testbed-node-3]
2025-06-02 17:52:28.961619 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:52:28.961627 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:52:28.961634 | orchestrator |
2025-06-02 17:52:28.961642 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:52:28.961651 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-06-02 17:52:28.961660 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 17:52:28.961668 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 17:52:28.961676 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-06-02 17:52:28.961696 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 17:52:28.961704 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 17:52:28.961712 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-06-02 17:52:28.961720 | orchestrator |
2025-06-02 17:52:28.961728 | orchestrator |
2025-06-02 17:52:28.961736 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:52:28.961744 | orchestrator | Monday 02 June 2025 17:52:27 +0000 (0:00:06.322) 0:03:29.263 ***********
2025-06-02 17:52:28.961752 | orchestrator | ===============================================================================
2025-06-02 17:52:28.961759 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 30.27s
2025-06-02 17:52:28.961767 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 21.83s
2025-06-02 17:52:28.961775 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.64s
2025-06-02 17:52:28.961782 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.87s
2025-06-02 17:52:28.961796 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 12.25s
2025-06-02 17:52:28.961805 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 12.16s
2025-06-02 17:52:28.961812 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 11.08s
2025-06-02 17:52:28.961820 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 9.69s
2025-06-02 17:52:28.961828 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.64s
2025-06-02 17:52:28.961836 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.61s
2025-06-02 17:52:28.961843 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.32s
2025-06-02 17:52:28.961851 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.07s
2025-06-02 17:52:28.961859 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.25s
2025-06-02 17:52:28.961866 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.24s
2025-06-02 17:52:28.961874 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.56s
2025-06-02 17:52:28.961882 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.56s
2025-06-02 17:52:28.961890 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.18s
2025-06-02 17:52:28.961898 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.91s
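The TASKS RECAP above ranks tasks by duration; when triaging slow deploys it can help to pull these entries out of the console output programmatically. A minimal sketch, assuming only the `name ----- N.NNs` line shape seen in this recap (not an official tool of this job):

```python
import re

# Each TASKS RECAP entry ends in a dash run followed by a duration like "30.27s".
RECAP_RE = re.compile(r"^(?P<task>.+?)\s-+\s(?P<secs>\d+\.\d+)s$")

def slowest_tasks(lines, top=3):
    """Parse TASKS RECAP lines and return the slowest tasks, longest first."""
    parsed = []
    for line in lines:
        m = RECAP_RE.match(line.strip())
        if m:
            parsed.append((m.group("task"), float(m.group("secs"))))
    return sorted(parsed, key=lambda t: t[1], reverse=True)[:top]

# Sample lines abridged from the recap above.
recap = [
    "prometheus : Copying over custom prometheus alert rules files ---------- 30.27s",
    "prometheus : Copying over prometheus config file ----------------------- 21.83s",
    "prometheus : Restart prometheus-server container ----------------------- 17.64s",
]
print(slowest_tasks(recap, top=2))
```

The non-greedy task group stops at the first space-delimited dash run, so hyphenated container names such as `prometheus-server` stay inside the task name.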
2025-06-02 17:52:28.961905 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.89s
2025-06-02 17:52:28.961913 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.89s
2025-06-02 17:52:28.961921 | orchestrator | 2025-06-02 17:52:28 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:52:31.994179 | orchestrator | 2025-06-02 17:52:31 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED
2025-06-02 17:52:32.002527 | orchestrator | 2025-06-02 17:52:31 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED
2025-06-02 17:52:32.002596 | orchestrator | 2025-06-02 17:52:32 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state STARTED
2025-06-02 17:52:32.003518 | orchestrator | 2025-06-02 17:52:32 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state STARTED
2025-06-02 17:52:32.004364 | orchestrator | 2025-06-02 17:52:32 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:53:32.838870 | orchestrator | 2025-06-02 17:53:32 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED
2025-06-02 17:53:32.838972 | orchestrator | 2025-06-02 17:53:32 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED
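The repeated INFO lines above come from a client polling asynchronous task IDs until they leave the STARTED state. A minimal sketch of such a wait loop (not the actual osism client; `get_state`, the state names, and the timeout handling are assumptions based on the log messages):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll task states until none is STARTED; return the final states.

    get_state(task_id) is assumed to return "STARTED" while a task runs,
    then a terminal state such as "SUCCESS" or "FAILURE".
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    final = {}
    while pending:
        # sorted() copies the set, so discarding while iterating is safe.
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"INFO  | Task {task_id} is in state {state}")
            if state != "STARTED":
                final[task_id] = state
                pending.discard(task_id)
        if not pending:
            break
        if time.monotonic() >= deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"INFO  | Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
    return final
```

With a one-second interval, four in-flight tasks produce exactly the five-line rounds seen above: one status line per task, then a single wait line.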
2025-06-02 17:53:32.839287 | orchestrator | 2025-06-02 17:53:32 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state STARTED
2025-06-02 17:53:32.841955 | orchestrator | 2025-06-02 17:53:32 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED
2025-06-02 17:53:32.843180 | orchestrator | 2025-06-02 17:53:32 | INFO  | Task 1444b7b1-2deb-4e7e-9efb-8b4a6ee2ba82 is in state SUCCESS
2025-06-02 17:53:32.844655 | orchestrator |
2025-06-02 17:53:32.844692 | orchestrator |
2025-06-02 17:53:32.844763 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:53:32.844780 | orchestrator |
2025-06-02 17:53:32.844798 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:53:32.844818 | orchestrator | Monday 02 June 2025 17:49:30 +0000 (0:00:00.266) 0:00:00.266 ***********
2025-06-02 17:53:32.844836 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:53:32.844959 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:53:32.844976 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:53:32.844987 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:53:32.844998 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:53:32.845009 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:53:32.845020 | orchestrator |
2025-06-02 17:53:32.845031 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:53:32.845042 | orchestrator | Monday 02 June 2025 17:49:32 +0000 (0:00:01.631) 0:00:01.898 ***********
2025-06-02 17:53:32.845054 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-06-02 17:53:32.845102 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-06-02 17:53:32.845114 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-06-02 17:53:32.845125 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-06-02 17:53:32.845136 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-06-02 17:53:32.845291 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-06-02 17:53:32.845307 | orchestrator |
2025-06-02 17:53:32.845321 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-06-02 17:53:32.845695 | orchestrator |
2025-06-02 17:53:32.845714 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 17:53:32.845726 | orchestrator | Monday 02 June 2025 17:49:33 +0000 (0:00:01.375) 0:00:03.273 ***********
2025-06-02 17:53:32.845761 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:53:32.845774 | orchestrator |
2025-06-02 17:53:32.845785 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-06-02 17:53:32.845796 | orchestrator | Monday 02 June 2025 17:49:38 +0000 (0:00:04.206) 0:00:07.480 ***********
2025-06-02 17:53:32.845809 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-06-02 17:53:32.845820 | orchestrator |
2025-06-02 17:53:32.845878 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-06-02 17:53:32.845890 | orchestrator | Monday 02 June 2025 17:49:42 +0000 (0:00:04.259) 0:00:11.739 ***********
2025-06-02 17:53:32.845901 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-06-02 17:53:32.845990 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-06-02 17:53:32.846003 | orchestrator |
2025-06-02 17:53:32.846374 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-06-02 17:53:32.846388 | orchestrator | Monday 02 June 2025 17:49:49 +0000 (0:00:06.676) 0:00:18.416 ***********
2025-06-02 17:53:32.846399 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 17:53:32.846411 | orchestrator |
2025-06-02 17:53:32.846422 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-06-02 17:53:32.846433 | orchestrator | Monday 02 June 2025 17:49:51 +0000 (0:00:02.909) 0:00:21.325 ***********
2025-06-02 17:53:32.846444 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 17:53:32.846455 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-06-02 17:53:32.846466 | orchestrator |
2025-06-02 17:53:32.846477 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-06-02 17:53:32.846488 | orchestrator | Monday 02 June 2025 17:49:55 +0000 (0:00:03.562) 0:00:24.887 ***********
2025-06-02 17:53:32.846498 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 17:53:32.846509 | orchestrator |
2025-06-02 17:53:32.846520 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-06-02 17:53:32.846531 | orchestrator | Monday 02 June 2025 17:49:59 +0000 (0:00:03.655) 0:00:28.542 ***********
2025-06-02 17:53:32.846542 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-06-02 17:53:32.846552 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-06-02 17:53:32.846563 | orchestrator |
2025-06-02 17:53:32.846574 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-06-02 17:53:32.846585 | orchestrator | Monday 02 June 2025 17:50:07 +0000 (0:00:08.324) 0:00:36.867 ***********
2025-06-02 17:53:32.846600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:32.846661 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:32.846689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:32.846710 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.846724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.846735 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.846775 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.846795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 
5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.846808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.846825 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.846838 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.846849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.846868 | orchestrator |
2025-06-02 17:53:32.846904 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 17:53:32.846916 | orchestrator | Monday 02 June 2025 17:50:10 +0000 (0:00:03.195) 0:00:40.063 ***********
2025-06-02 17:53:32.846927 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:53:32.846938 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:53:32.846949 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:53:32.846960 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:53:32.846970 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:53:32.846981 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:53:32.846992 | orchestrator |
2025-06-02 17:53:32.847004 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 17:53:32.847016 | orchestrator | Monday 02 June 2025 17:50:11 +0000 (0:00:00.954) 0:00:41.017 ***********
2025-06-02 17:53:32.847029 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:53:32.847042 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:53:32.847054 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:53:32.847107 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:53:32.847120 | orchestrator |
2025-06-02 17:53:32.847132 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] *************
2025-06-02 17:53:32.847144 | orchestrator | Monday 02 June 2025 17:50:12 +0000 (0:00:01.125) 0:00:42.143 ***********
2025-06-02 17:53:32.847157 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-06-02 17:53:32.847170 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-06-02 17:53:32.847182 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-06-02 17:53:32.847194 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-06-02 17:53:32.847207 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-06-02 17:53:32.847219 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-06-02 17:53:32.847230 | orchestrator |
2025-06-02 17:53:32.847243 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-06-02 17:53:32.847255 | orchestrator | Monday 02 June 2025 17:50:15 +0000 (0:00:02.529) 0:00:44.672 ***********
2025-06-02 17:53:32.847274 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:53:32.847288 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:53:32.847302 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:53:32.847354 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:53:32.847370 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:53:32.847389 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:53:32.847401 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:53:32.847420 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:53:32.847457 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:53:32.847469 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:53:32.847487 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:53:32.847499 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-06-02 17:53:32.847517 | orchestrator |
2025-06-02 17:53:32.847528 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-06-02 17:53:32.847539 | orchestrator | Monday 02 June 2025 17:50:19 +0000 (0:00:04.379) 0:00:49.051 ***********
2025-06-02 17:53:32.847550 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-02 17:53:32.847562 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-02 17:53:32.847574 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-06-02 17:53:32.847584 | orchestrator |
2025-06-02 17:53:32.847595 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-06-02 17:53:32.847606 | orchestrator | Monday 02 June 2025 17:50:22 +0000 (0:00:03.124) 0:00:52.175 ***********
2025-06-02 17:53:32.847617 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-06-02 17:53:32.847628 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-06-02 17:53:32.847638 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-06-02 17:53:32.847649 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 17:53:32.847660 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 17:53:32.847695 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-06-02 17:53:32.847707 | orchestrator |
2025-06-02 17:53:32.847718 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-06-02 17:53:32.847729 | orchestrator | Monday 02 June 2025 17:50:26 +0000 (0:00:03.917) 0:00:56.093 ***********
2025-06-02 17:53:32.847740 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-06-02 17:53:32.847751 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-06-02 17:53:32.847761 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-06-02 17:53:32.847772 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-06-02 17:53:32.847783 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-06-02 17:53:32.847793 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-06-02 17:53:32.847804 | orchestrator |
2025-06-02 17:53:32.847815 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-06-02 17:53:32.847825 | orchestrator | Monday 02 June 2025 17:50:28 +0000 (0:00:00.348) 0:00:57.422 ***********
2025-06-02 17:53:32.847836 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:53:32.847847 | orchestrator |
2025-06-02 17:53:32.847857 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-06-02 17:53:32.847868 | orchestrator | Monday 02 June 2025 17:50:28 +0000 (0:00:00.348) 0:00:57.771 ***********
2025-06-02 17:53:32.847879 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:53:32.847889 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:53:32.847900 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:53:32.847910 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:53:32.847921 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:53:32.847931 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:53:32.847942 | orchestrator |
2025-06-02 17:53:32.847952 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-06-02 17:53:32.847963 | orchestrator | Monday 02 June 2025 17:50:30 +0000 (0:00:01.993) 0:00:59.765 ***********
2025-06-02 17:53:32.847975 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-06-02 17:53:32.847987 | orchestrator |
2025-06-02 17:53:32.847998 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-06-02 17:53:32.848008 | orchestrator | Monday 02 June 2025 17:50:31 +0000 (0:00:01.222) 0:01:00.988 ***********
2025-06-02 17:53:32.848037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:53:32.848050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:53:32.848150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:53:32.848167 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848205 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848217 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848261 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848272 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848283 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848305 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848315 | orchestrator |
2025-06-02 17:53:32.848325 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] ***
2025-06-02 17:53:32.848335 | orchestrator | Monday 02 June 2025 17:50:35 +0000 (0:00:04.256) 0:01:05.245 ***********
2025-06-02 17:53:32.848345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:53:32.848361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:53:32.848381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848398 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:53:32.848413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:53:32.848424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848451 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848462 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:53:32.848471 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:53:32.848481 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:53:32.848491 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848512 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848522 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:53:32.848533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848543 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848552 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:53:32.848562 | orchestrator |
2025-06-02 17:53:32.848572 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-06-02 17:53:32.848581 | orchestrator | Monday 02 June 2025 17:50:38 +0000 (0:00:02.496) 0:01:07.741 ***********
2025-06-02 17:53:32.848597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:53:32.848613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-06-02 17:53:32.848638 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:53:32.848648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-06-02 17:53:32.848658 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:53:32.848668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:53:32.848687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.848697 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:53:32.848713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.848723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.848733 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:53:32.848748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.848758 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.848768 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:53:32.848783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.848800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.848810 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:53:32.848820 | orchestrator | 2025-06-02 17:53:32.848830 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-02 17:53:32.848840 | orchestrator | Monday 02 June 2025 17:50:40 +0000 (0:00:02.286) 0:01:10.027 *********** 2025-06-02 17:53:32.848854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:32.848865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:32.848875 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:32.848892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.848912 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.848926 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.848937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.848947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.848957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.848983 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 
'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.848994 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849009 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849020 | orchestrator | 2025-06-02 17:53:32.849029 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-02 17:53:32.849039 | orchestrator | Monday 02 June 2025 17:50:44 +0000 (0:00:04.139) 0:01:14.166 *********** 2025-06-02 17:53:32.849049 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 17:53:32.849059 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 17:53:32.849100 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:53:32.849111 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 17:53:32.849121 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:53:32.849131 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-02 17:53:32.849141 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:53:32.849151 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 17:53:32.849160 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-02 17:53:32.849170 | orchestrator | 2025-06-02 17:53:32.849180 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-02 17:53:32.849189 | orchestrator | Monday 02 June 2025 17:50:47 +0000 (0:00:02.805) 0:01:16.972 *********** 2025-06-02 17:53:32.849199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:53:32.849222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:32.849245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:32.849260 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849270 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849292 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849313 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849338 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849348 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849375 | orchestrator | 2025-06-02 17:53:32.849389 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-02 17:53:32.849400 | orchestrator | Monday 02 June 2025 17:50:58 +0000 (0:00:11.066) 0:01:28.039 *********** 2025-06-02 17:53:32.849409 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:53:32.849420 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:53:32.849430 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:53:32.849439 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:53:32.849448 | orchestrator 
| changed: [testbed-node-3] 2025-06-02 17:53:32.849458 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:53:32.849468 | orchestrator | 2025-06-02 17:53:32.849477 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-02 17:53:32.849487 | orchestrator | Monday 02 June 2025 17:51:02 +0000 (0:00:03.408) 0:01:31.448 *********** 2025-06-02 17:53:32.849497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:53:32.849512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.849523 | orchestrator | skipping: [testbed-node-0] 
2025-06-02 17:53:32.849533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:53:32.849549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.849560 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:53:32.849578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.849589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.849605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-02 17:53:32.849620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.849636 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:53:32.849646 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:53:32.849656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.849667 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.849682 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:53:32.849693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.849703 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-02 17:53:32.849713 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:53:32.849723 | orchestrator | 2025-06-02 17:53:32.849733 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-02 17:53:32.849743 | orchestrator | Monday 02 June 2025 17:51:03 +0000 (0:00:01.733) 0:01:33.181 *********** 2025-06-02 17:53:32.849752 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:53:32.849762 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:53:32.849777 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:53:32.849787 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:53:32.849801 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:53:32.849811 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:53:32.849820 | orchestrator | 2025-06-02 17:53:32.849830 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-02 17:53:32.849840 | orchestrator | Monday 02 June 2025 17:51:04 +0000 (0:00:00.948) 0:01:34.130 *********** 2025-06-02 17:53:32.849849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:32.849860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:32.849876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-02 17:53:32.849910 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849921 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 
17:53:32.849957 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849987 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.849998 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-02 17:53:32.850008 | orchestrator | 2025-06-02 17:53:32.850051 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-02 17:53:32.850107 | orchestrator | Monday 02 June 2025 17:51:07 +0000 (0:00:02.885) 0:01:37.015 *********** 2025-06-02 17:53:32.850120 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:53:32.850130 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:53:32.850140 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:53:32.850149 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:53:32.850159 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:53:32.850169 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:53:32.850178 | orchestrator | 2025-06-02 17:53:32.850188 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-02 17:53:32.850198 | orchestrator | Monday 02 June 2025 17:51:09 +0000 (0:00:02.001) 0:01:39.016 *********** 2025-06-02 17:53:32.850208 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:53:32.850218 | orchestrator | 2025-06-02 17:53:32.850227 | orchestrator | TASK [cinder : 
Creating Cinder database user and setting permissions] ********** 2025-06-02 17:53:32.850236 | orchestrator | Monday 02 June 2025 17:51:12 +0000 (0:00:02.597) 0:01:41.614 *********** 2025-06-02 17:53:32.850246 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:53:32.850255 | orchestrator | 2025-06-02 17:53:32.850265 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-02 17:53:32.850274 | orchestrator | Monday 02 June 2025 17:51:14 +0000 (0:00:02.399) 0:01:44.014 *********** 2025-06-02 17:53:32.850282 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:53:32.850290 | orchestrator | 2025-06-02 17:53:32.850302 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 17:53:32.850311 | orchestrator | Monday 02 June 2025 17:51:34 +0000 (0:00:19.855) 0:02:03.869 *********** 2025-06-02 17:53:32.850319 | orchestrator | 2025-06-02 17:53:32.850327 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 17:53:32.850335 | orchestrator | Monday 02 June 2025 17:51:34 +0000 (0:00:00.084) 0:02:03.954 *********** 2025-06-02 17:53:32.850343 | orchestrator | 2025-06-02 17:53:32.850351 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 17:53:32.850358 | orchestrator | Monday 02 June 2025 17:51:34 +0000 (0:00:00.064) 0:02:04.019 *********** 2025-06-02 17:53:32.850374 | orchestrator | 2025-06-02 17:53:32.850382 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 17:53:32.850390 | orchestrator | Monday 02 June 2025 17:51:34 +0000 (0:00:00.063) 0:02:04.082 *********** 2025-06-02 17:53:32.850398 | orchestrator | 2025-06-02 17:53:32.850406 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 17:53:32.850414 | orchestrator | Monday 02 June 2025 17:51:34 
+0000 (0:00:00.063) 0:02:04.145 *********** 2025-06-02 17:53:32.850421 | orchestrator | 2025-06-02 17:53:32.850429 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-02 17:53:32.850437 | orchestrator | Monday 02 June 2025 17:51:34 +0000 (0:00:00.064) 0:02:04.210 *********** 2025-06-02 17:53:32.850445 | orchestrator | 2025-06-02 17:53:32.850453 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-02 17:53:32.850461 | orchestrator | Monday 02 June 2025 17:51:34 +0000 (0:00:00.061) 0:02:04.271 *********** 2025-06-02 17:53:32.850468 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:53:32.850476 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:53:32.850484 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:53:32.850492 | orchestrator | 2025-06-02 17:53:32.850500 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-02 17:53:32.850507 | orchestrator | Monday 02 June 2025 17:51:56 +0000 (0:00:21.818) 0:02:26.089 *********** 2025-06-02 17:53:32.850515 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:53:32.850523 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:53:32.850531 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:53:32.850539 | orchestrator | 2025-06-02 17:53:32.850546 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-02 17:53:32.850554 | orchestrator | Monday 02 June 2025 17:52:03 +0000 (0:00:06.770) 0:02:32.860 *********** 2025-06-02 17:53:32.850562 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:53:32.850570 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:53:32.850578 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:53:32.850586 | orchestrator | 2025-06-02 17:53:32.850594 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-02 
17:53:32.850602 | orchestrator | Monday 02 June 2025 17:53:21 +0000 (0:01:17.903) 0:03:50.764 *********** 2025-06-02 17:53:32.850609 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:53:32.850622 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:53:32.850630 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:53:32.850638 | orchestrator | 2025-06-02 17:53:32.850646 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-02 17:53:32.850654 | orchestrator | Monday 02 June 2025 17:53:30 +0000 (0:00:09.219) 0:03:59.983 *********** 2025-06-02 17:53:32.850662 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:53:32.850670 | orchestrator | 2025-06-02 17:53:32.850678 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:53:32.850686 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-02 17:53:32.850695 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 17:53:32.850703 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 17:53:32.850711 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 17:53:32.850719 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 17:53:32.850726 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-02 17:53:32.850739 | orchestrator | 2025-06-02 17:53:32.850747 | orchestrator | 2025-06-02 17:53:32.850755 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:53:32.850763 | orchestrator | Monday 02 June 2025 17:53:31 +0000 (0:00:00.613) 0:04:00.597 *********** 2025-06-02 
17:53:32.850771 | orchestrator | =============================================================================== 2025-06-02 17:53:32.850779 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 77.90s 2025-06-02 17:53:32.850787 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 21.82s 2025-06-02 17:53:32.850794 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.86s 2025-06-02 17:53:32.850802 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 11.07s 2025-06-02 17:53:32.850810 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 9.22s 2025-06-02 17:53:32.850818 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.32s 2025-06-02 17:53:32.850826 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 6.77s 2025-06-02 17:53:32.850837 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.68s 2025-06-02 17:53:32.850845 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.38s 2025-06-02 17:53:32.850853 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 4.26s 2025-06-02 17:53:32.850861 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.26s 2025-06-02 17:53:32.850868 | orchestrator | cinder : include_tasks -------------------------------------------------- 4.21s 2025-06-02 17:53:32.850876 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.14s 2025-06-02 17:53:32.850884 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.92s 2025-06-02 17:53:32.850892 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.66s 2025-06-02 17:53:32.850900 
| orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.56s 2025-06-02 17:53:32.850908 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.41s 2025-06-02 17:53:32.850916 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 3.20s 2025-06-02 17:53:32.850923 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.12s 2025-06-02 17:53:32.850931 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.91s 2025-06-02 17:53:35.872798 | orchestrator | 2025-06-02 17:53:35 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:53:35.872879 | orchestrator | 2025-06-02 17:53:35 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:53:35.872888 | orchestrator | 2025-06-02 17:53:35 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state STARTED 2025-06-02 17:53:35.873259 | orchestrator | 2025-06-02 17:53:35 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:53:35.873290 | orchestrator | 2025-06-02 17:53:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:53:38.907834 | orchestrator | 2025-06-02 17:53:38 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:53:38.907937 | orchestrator | 2025-06-02 17:53:38 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:53:38.908531 | orchestrator | 2025-06-02 17:53:38 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state STARTED 2025-06-02 17:53:38.909228 | orchestrator | 2025-06-02 17:53:38 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:53:38.909263 | orchestrator | 2025-06-02 17:53:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:53:41.951435 | orchestrator | 2025-06-02 17:53:41 | INFO  | Task 
6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:54:09.390245 | orchestrator | 2025-06-02 17:54:09 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:54:09.392540 | orchestrator | 2025-06-02 17:54:09 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state STARTED 2025-06-02 17:54:09.399091 | orchestrator | 2025-06-02 17:54:09 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:54:09.399235 | orchestrator | 2025-06-02 17:54:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:54:12.436620 | orchestrator | 2025-06-02 17:54:12 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:54:12.437818 | orchestrator | 2025-06-02 17:54:12 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:54:12.438646 | orchestrator | 2025-06-02 17:54:12 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state STARTED 2025-06-02 17:54:12.439566 | orchestrator | 2025-06-02 17:54:12 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:54:12.439680 | orchestrator | 2025-06-02 17:54:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:54:15.480849 | orchestrator | 2025-06-02 17:54:15 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:54:15.481146 | orchestrator | 2025-06-02 17:54:15 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:54:15.482205 | orchestrator | 2025-06-02 17:54:15 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state STARTED 2025-06-02 17:54:15.489778 | orchestrator | 2025-06-02 17:54:15 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:54:15.489873 | orchestrator | 2025-06-02 17:54:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:54:18.514724 | orchestrator | 2025-06-02 17:54:18 | INFO  | Task 
6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:54:18.515356 | orchestrator | 2025-06-02 17:54:18 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:54:18.516391 | orchestrator | 2025-06-02 17:54:18 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state STARTED 2025-06-02 17:54:18.517097 | orchestrator | 2025-06-02 17:54:18 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:54:18.517310 | orchestrator | 2025-06-02 17:54:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:54:21.567412 | orchestrator | 2025-06-02 17:54:21 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:54:21.567779 | orchestrator | 2025-06-02 17:54:21 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:54:21.569578 | orchestrator | 2025-06-02 17:54:21 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state STARTED 2025-06-02 17:54:21.570232 | orchestrator | 2025-06-02 17:54:21 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:54:21.570288 | orchestrator | 2025-06-02 17:54:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:54:24.609017 | orchestrator | 2025-06-02 17:54:24 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:54:24.612348 | orchestrator | 2025-06-02 17:54:24 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:54:24.613270 | orchestrator | 2025-06-02 17:54:24 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state STARTED 2025-06-02 17:54:24.616507 | orchestrator | 2025-06-02 17:54:24 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:54:24.616556 | orchestrator | 2025-06-02 17:54:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:54:27.651310 | orchestrator | 2025-06-02 17:54:27 | INFO  | Task 
6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:54:27.651537 | orchestrator | 2025-06-02 17:54:27 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:54:27.652057 | orchestrator | 2025-06-02 17:54:27 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state STARTED 2025-06-02 17:54:27.652670 | orchestrator | 2025-06-02 17:54:27 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:54:27.652894 | orchestrator | 2025-06-02 17:54:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:54:30.681383 | orchestrator | 2025-06-02 17:54:30 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:54:30.684413 | orchestrator | 2025-06-02 17:54:30 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:54:30.686654 | orchestrator | 2025-06-02 17:54:30 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state STARTED 2025-06-02 17:54:30.688384 | orchestrator | 2025-06-02 17:54:30 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:54:30.688562 | orchestrator | 2025-06-02 17:54:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:54:33.714880 | orchestrator | 2025-06-02 17:54:33 | INFO  | Task b9282382-0e70-4854-964b-af2ba9f4cb63 is in state STARTED 2025-06-02 17:54:33.715062 | orchestrator | 2025-06-02 17:54:33 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:54:33.715754 | orchestrator | 2025-06-02 17:54:33 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:54:33.716989 | orchestrator | 2025-06-02 17:54:33 | INFO  | Task 4f4d3112-520b-45dc-8e7a-cfa0696113b9 is in state SUCCESS 2025-06-02 17:54:33.718097 | orchestrator | 2025-06-02 17:54:33.718175 | orchestrator | 2025-06-02 17:54:33.718186 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 
2025-06-02 17:54:33.718195 | orchestrator |
2025-06-02 17:54:33.718203 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:54:33.718212 | orchestrator | Monday 02 June 2025 17:52:31 +0000 (0:00:00.259) 0:00:00.259 ***********
2025-06-02 17:54:33.718220 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:54:33.718230 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:54:33.718239 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:54:33.718247 | orchestrator |
2025-06-02 17:54:33.718255 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:54:33.718260 | orchestrator | Monday 02 June 2025 17:52:32 +0000 (0:00:00.310) 0:00:00.569 ***********
2025-06-02 17:54:33.718265 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-06-02 17:54:33.718271 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-06-02 17:54:33.718275 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-06-02 17:54:33.718280 | orchestrator |
2025-06-02 17:54:33.718285 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-06-02 17:54:33.718289 | orchestrator |
2025-06-02 17:54:33.718294 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-02 17:54:33.718299 | orchestrator | Monday 02 June 2025 17:52:32 +0000 (0:00:00.436) 0:00:01.006 ***********
2025-06-02 17:54:33.718304 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:54:33.718309 | orchestrator |
2025-06-02 17:54:33.718314 | orchestrator | TASK [service-ks-register : barbican | Creating services] **********************
2025-06-02 17:54:33.718318 | orchestrator | Monday 02 June 2025 17:52:33 +0000 (0:00:00.600) 0:00:01.607 ***********
2025-06-02 17:54:33.718323 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager))
2025-06-02 17:54:33.718328 | orchestrator |
2025-06-02 17:54:33.718332 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] *********************
2025-06-02 17:54:33.718337 | orchestrator | Monday 02 June 2025 17:52:36 +0000 (0:00:03.383) 0:00:04.991 ***********
2025-06-02 17:54:33.718341 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal)
2025-06-02 17:54:33.718347 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public)
2025-06-02 17:54:33.718643 | orchestrator |
2025-06-02 17:54:33.718657 | orchestrator | TASK [service-ks-register : barbican | Creating projects] **********************
2025-06-02 17:54:33.718674 | orchestrator | Monday 02 June 2025 17:52:43 +0000 (0:00:06.571) 0:00:11.562 ***********
2025-06-02 17:54:33.718679 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 17:54:33.718684 | orchestrator |
2025-06-02 17:54:33.718688 | orchestrator | TASK [service-ks-register : barbican | Creating users] *************************
2025-06-02 17:54:33.718693 | orchestrator | Monday 02 June 2025 17:52:46 +0000 (0:00:03.530) 0:00:15.092 ***********
2025-06-02 17:54:33.718698 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 17:54:33.718703 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2025-06-02 17:54:33.718708 | orchestrator |
2025-06-02 17:54:33.718712 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2025-06-02 17:54:33.718717 | orchestrator | Monday 02 June 2025 17:52:50 +0000 (0:00:04.123) 0:00:19.216 ***********
2025-06-02 17:54:33.718734 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 17:54:33.718739 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2025-06-02 17:54:33.718744 | orchestrator | changed: [testbed-node-0] => (item=creator)
2025-06-02 17:54:33.718748 | orchestrator | changed: [testbed-node-0] => (item=observer)
2025-06-02 17:54:33.718753 | orchestrator | changed: [testbed-node-0] => (item=audit)
2025-06-02 17:54:33.718757 | orchestrator |
2025-06-02 17:54:33.718762 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2025-06-02 17:54:33.718766 | orchestrator | Monday 02 June 2025 17:53:06 +0000 (0:00:15.838) 0:00:35.054 ***********
2025-06-02 17:54:33.718771 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2025-06-02 17:54:33.718775 | orchestrator |
2025-06-02 17:54:33.718780 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2025-06-02 17:54:33.718784 | orchestrator | Monday 02 June 2025 17:53:10 +0000 (0:00:04.067) 0:00:39.121 ***********
2025-06-02 17:54:33.718792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 17:54:33.718813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 17:54:33.718821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 17:54:33.718835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.718851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.718858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.718874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.718883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.718890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.718897 | orchestrator |
2025-06-02 17:54:33.718904 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-06-02 17:54:33.718912 | orchestrator | Monday 02 June 2025 17:53:12 +0000 (0:00:02.050) 0:00:41.172 ***********
2025-06-02 17:54:33.718919 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-06-02 17:54:33.718927 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-06-02 17:54:33.718941 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-06-02 17:54:33.718949 | orchestrator |
2025-06-02 17:54:33.718957 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-06-02 17:54:33.718970 | orchestrator | Monday 02 June 2025 17:53:14 +0000 (0:00:01.849) 0:00:43.022 ***********
2025-06-02 17:54:33.718978 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:54:33.718987 | orchestrator |
2025-06-02 17:54:33.718992 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-06-02 17:54:33.718999 | orchestrator | Monday 02 June 2025 17:53:14 +0000 (0:00:00.169) 0:00:43.191 ***********
2025-06-02 17:54:33.719006 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:54:33.719013 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:54:33.719020 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:54:33.719026 | orchestrator |
2025-06-02 17:54:33.719034 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-06-02 17:54:33.719041 | orchestrator | Monday 02 June 2025 17:53:15 +0000 (0:00:00.583) 0:00:43.775 ***********
2025-06-02 17:54:33.719048 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:54:33.719055 | orchestrator |
2025-06-02 17:54:33.719061 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2025-06-02 17:54:33.719068 | orchestrator | Monday 02 June 2025 17:53:15 +0000 (0:00:00.485) 0:00:44.261 ***********
2025-06-02 17:54:33.719076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 17:54:33.719154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 17:54:33.719165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 17:54:33.719185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.719194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.719202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.719210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.719225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.719234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.719247 | orchestrator |
2025-06-02 17:54:33.719255 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2025-06-02 17:54:33.719262 | orchestrator | Monday 02 June 2025 17:53:20 +0000 (0:00:04.501) 0:00:48.763 ***********
2025-06-02 17:54:33.719272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 17:54:33.719280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.719290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.719297 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:54:33.719310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 17:54:33.719319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.719334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.719340 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:54:33.719351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-06-02 17:54:33.719360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.719368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:54:33.719376 | orchestrator | skipping:
[testbed-node-1] 2025-06-02 17:54:33.719384 | orchestrator | 2025-06-02 17:54:33.719392 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-02 17:54:33.719400 | orchestrator | Monday 02 June 2025 17:53:22 +0000 (0:00:01.697) 0:00:50.460 *********** 2025-06-02 17:54:33.719415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:54:33.719430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:54:33.719447 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:54:33.719456 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:54:33.719464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:54:33.719473 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:54:33.719481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:54:33.719489 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:54:33.719503 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:54:33.719518 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:54:33.719529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:54:33.719537 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:54:33.719544 | orchestrator | 2025-06-02 17:54:33.719551 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-02 17:54:33.719559 | orchestrator | Monday 02 June 2025 17:53:23 +0000 (0:00:01.468) 0:00:51.929 *********** 2025-06-02 17:54:33.719566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:54:33.719579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:54:33.719593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:54:33.719600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.719612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.719621 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.719629 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.719642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.719655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.719663 | orchestrator | 2025-06-02 17:54:33.719670 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-02 17:54:33.719679 | orchestrator | Monday 02 June 2025 17:53:27 +0000 (0:00:03.999) 0:00:55.928 *********** 2025-06-02 17:54:33.719685 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:54:33.719690 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:54:33.719695 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:54:33.719699 | orchestrator | 2025-06-02 17:54:33.719705 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-02 17:54:33.719713 | orchestrator | Monday 02 June 2025 17:53:29 +0000 (0:00:01.919) 0:00:57.848 *********** 2025-06-02 17:54:33.719720 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:54:33.719727 | orchestrator | 2025-06-02 17:54:33.719735 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-02 17:54:33.719742 | orchestrator | Monday 02 June 2025 17:53:30 +0000 (0:00:01.106) 0:00:58.955 *********** 2025-06-02 17:54:33.719749 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:54:33.719757 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:54:33.719765 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:54:33.719772 | orchestrator | 2025-06-02 17:54:33.719780 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-02 
17:54:33.719787 | orchestrator | Monday 02 June 2025 17:53:31 +0000 (0:00:00.636) 0:00:59.591 *********** 2025-06-02 17:54:33.719799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:54:33.719808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:54:33.719827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:54:33.719833 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.719837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.719843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.719848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.719926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.719945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.719950 | orchestrator | 2025-06-02 17:54:33.719955 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-02 17:54:33.719959 | orchestrator | Monday 02 June 2025 17:53:42 +0000 (0:00:11.673) 0:01:11.265 *********** 2025-06-02 17:54:33.719969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:54:33.719974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:54:33.719982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:54:33.719987 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:54:33.719991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:54:33.720000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:54:33.720008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:54:33.720013 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:54:33.720018 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-02 17:54:33.720026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-02 17:54:33.720031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:54:33.720039 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:54:33.720044 | orchestrator | 2025-06-02 17:54:33.720048 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-02 17:54:33.720053 | orchestrator | Monday 02 June 2025 17:53:43 +0000 (0:00:00.743) 0:01:12.008 *********** 2025-06-02 17:54:33.720058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:54:33.720066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:54:33.720071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-02 17:54:33.720079 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.720083 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.720092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.720097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.720107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.720112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:54:33.720116 | orchestrator | 2025-06-02 17:54:33.720121 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-02 17:54:33.720151 | orchestrator | Monday 02 June 2025 17:53:47 +0000 (0:00:03.433) 0:01:15.442 *********** 2025-06-02 17:54:33.720161 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:54:33.720168 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:54:33.720175 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:54:33.720182 | orchestrator | 2025-06-02 17:54:33.720189 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-02 
17:54:33.720196 | orchestrator | Monday 02 June 2025 17:53:47 +0000 (0:00:00.764) 0:01:16.206 *********** 2025-06-02 17:54:33.720203 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:54:33.720210 | orchestrator | 2025-06-02 17:54:33.720217 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-02 17:54:33.720224 | orchestrator | Monday 02 June 2025 17:53:50 +0000 (0:00:02.483) 0:01:18.690 *********** 2025-06-02 17:54:33.720231 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:54:33.720237 | orchestrator | 2025-06-02 17:54:33.720251 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-02 17:54:33.720261 | orchestrator | Monday 02 June 2025 17:53:52 +0000 (0:00:02.470) 0:01:21.160 *********** 2025-06-02 17:54:33.720268 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:54:33.720274 | orchestrator | 2025-06-02 17:54:33.720281 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 17:54:33.720287 | orchestrator | Monday 02 June 2025 17:54:04 +0000 (0:00:11.345) 0:01:32.506 *********** 2025-06-02 17:54:33.720294 | orchestrator | 2025-06-02 17:54:33.720301 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 17:54:33.720308 | orchestrator | Monday 02 June 2025 17:54:04 +0000 (0:00:00.073) 0:01:32.579 *********** 2025-06-02 17:54:33.720315 | orchestrator | 2025-06-02 17:54:33.720322 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-02 17:54:33.720329 | orchestrator | Monday 02 June 2025 17:54:04 +0000 (0:00:00.078) 0:01:32.658 *********** 2025-06-02 17:54:33.720337 | orchestrator | 2025-06-02 17:54:33.720344 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-02 17:54:33.720351 | orchestrator | Monday 02 June 2025 17:54:04 +0000 
(0:00:00.066) 0:01:32.725 *********** 2025-06-02 17:54:33.720358 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:54:33.720365 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:54:33.720372 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:54:33.720380 | orchestrator | 2025-06-02 17:54:33.720387 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-02 17:54:33.720395 | orchestrator | Monday 02 June 2025 17:54:12 +0000 (0:00:08.141) 0:01:40.866 *********** 2025-06-02 17:54:33.720402 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:54:33.720410 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:54:33.720418 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:54:33.720425 | orchestrator | 2025-06-02 17:54:33.720429 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-02 17:54:33.720434 | orchestrator | Monday 02 June 2025 17:54:23 +0000 (0:00:11.358) 0:01:52.225 *********** 2025-06-02 17:54:33.720438 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:54:33.720442 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:54:33.720447 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:54:33.720451 | orchestrator | 2025-06-02 17:54:33.720456 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:54:33.720462 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 17:54:33.720469 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 17:54:33.720477 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 17:54:33.720484 | orchestrator | 2025-06-02 17:54:33.720491 | orchestrator | 2025-06-02 17:54:33.720498 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-02 17:54:33.720505 | orchestrator | Monday 02 June 2025 17:54:31 +0000 (0:00:07.915) 0:02:00.140 *********** 2025-06-02 17:54:33.720512 | orchestrator | =============================================================================== 2025-06-02 17:54:33.720519 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.84s 2025-06-02 17:54:33.720531 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.67s 2025-06-02 17:54:33.720537 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 11.36s 2025-06-02 17:54:33.720544 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.35s 2025-06-02 17:54:33.720550 | orchestrator | barbican : Restart barbican-api container ------------------------------- 8.14s 2025-06-02 17:54:33.720557 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 7.92s 2025-06-02 17:54:33.720571 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.57s 2025-06-02 17:54:33.720578 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.50s 2025-06-02 17:54:33.720584 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.12s 2025-06-02 17:54:33.720591 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.07s 2025-06-02 17:54:33.720597 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.00s 2025-06-02 17:54:33.720604 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.53s 2025-06-02 17:54:33.720610 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.43s 2025-06-02 17:54:33.720617 | orchestrator | service-ks-register : barbican | 
Creating services ---------------------- 3.38s 2025-06-02 17:54:33.720623 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.48s 2025-06-02 17:54:33.720630 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.47s 2025-06-02 17:54:33.720637 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.05s 2025-06-02 17:54:33.720643 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.92s 2025-06-02 17:54:33.720650 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.85s 2025-06-02 17:54:33.720657 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.70s 2025-06-02 17:54:33.720664 | orchestrator | 2025-06-02 17:54:33 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:54:33.720672 | orchestrator | 2025-06-02 17:54:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:54:36.757374 | orchestrator | 2025-06-02 17:54:36 | INFO  | Task b9282382-0e70-4854-964b-af2ba9f4cb63 is in state STARTED 2025-06-02 17:54:36.759005 | orchestrator | 2025-06-02 17:54:36 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:54:36.759896 | orchestrator | 2025-06-02 17:54:36 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:54:36.760857 | orchestrator | 2025-06-02 17:54:36 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:54:36.760941 | orchestrator | 2025-06-02 17:54:36 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:54:39.786874 | orchestrator | 2025-06-02 17:54:39 | INFO  | Task b9282382-0e70-4854-964b-af2ba9f4cb63 is in state STARTED 2025-06-02 17:54:39.786972 | orchestrator | 2025-06-02 17:54:39 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 
17:55:22.314523 | orchestrator | 2025-06-02 17:55:22 | INFO  | Task b9282382-0e70-4854-964b-af2ba9f4cb63 is in state SUCCESS 2025-06-02 17:55:22.316190 | orchestrator | 2025-06-02 17:55:22 | INFO  | Task 882430d8-e2b6-42e1-ab2b-a355f383bd65 is in state STARTED 2025-06-02 17:55:22.317513 | orchestrator | 2025-06-02 17:55:22 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:55:22.320072 | orchestrator | 2025-06-02 17:55:22 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:55:22.322374 | orchestrator | 2025-06-02 17:55:22 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:55:22.322701 | orchestrator | 2025-06-02 17:55:22 | INFO  | Wait 1 second(s) until the next check
6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:56:29.367959 | orchestrator | 2025-06-02 17:56:29 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:56:29.368785 | orchestrator | 2025-06-02 17:56:29 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:56:29.368951 | orchestrator | 2025-06-02 17:56:29 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:56:32.413797 | orchestrator | 2025-06-02 17:56:32 | INFO  | Task 882430d8-e2b6-42e1-ab2b-a355f383bd65 is in state STARTED 2025-06-02 17:56:32.415851 | orchestrator | 2025-06-02 17:56:32 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:56:32.417627 | orchestrator | 2025-06-02 17:56:32 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:56:32.418908 | orchestrator | 2025-06-02 17:56:32 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:56:32.418956 | orchestrator | 2025-06-02 17:56:32 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:56:35.471002 | orchestrator | 2025-06-02 17:56:35 | INFO  | Task 882430d8-e2b6-42e1-ab2b-a355f383bd65 is in state STARTED 2025-06-02 17:56:35.475927 | orchestrator | 2025-06-02 17:56:35 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:56:35.478919 | orchestrator | 2025-06-02 17:56:35 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:56:35.481116 | orchestrator | 2025-06-02 17:56:35 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:56:35.481180 | orchestrator | 2025-06-02 17:56:35 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:56:38.535879 | orchestrator | 2025-06-02 17:56:38 | INFO  | Task 882430d8-e2b6-42e1-ab2b-a355f383bd65 is in state STARTED 2025-06-02 17:56:38.536927 | orchestrator | 2025-06-02 17:56:38 | INFO  | Task 
6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:56:38.538401 | orchestrator | 2025-06-02 17:56:38 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:56:38.540200 | orchestrator | 2025-06-02 17:56:38 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:56:38.540259 | orchestrator | 2025-06-02 17:56:38 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:56:41.578445 | orchestrator | 2025-06-02 17:56:41 | INFO  | Task 882430d8-e2b6-42e1-ab2b-a355f383bd65 is in state STARTED 2025-06-02 17:56:41.578926 | orchestrator | 2025-06-02 17:56:41 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:56:41.579665 | orchestrator | 2025-06-02 17:56:41 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:56:41.580640 | orchestrator | 2025-06-02 17:56:41 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:56:41.580686 | orchestrator | 2025-06-02 17:56:41 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:56:44.616500 | orchestrator | 2025-06-02 17:56:44 | INFO  | Task 882430d8-e2b6-42e1-ab2b-a355f383bd65 is in state SUCCESS 2025-06-02 17:56:44.617672 | orchestrator | 2025-06-02 17:56:44.617724 | orchestrator | 2025-06-02 17:56:44.617749 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-06-02 17:56:44.617755 | orchestrator | 2025-06-02 17:56:44.617760 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-06-02 17:56:44.617764 | orchestrator | Monday 02 June 2025 17:54:42 +0000 (0:00:00.259) 0:00:00.259 *********** 2025-06-02 17:56:44.617784 | orchestrator | changed: [localhost] 2025-06-02 17:56:44.617789 | orchestrator | 2025-06-02 17:56:44.617793 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-06-02 17:56:44.617797 | 
orchestrator | Monday 02 June 2025 17:54:44 +0000 (0:00:02.292) 0:00:02.551 ***********
2025-06-02 17:56:44.617801 | orchestrator | changed: [localhost]
2025-06-02 17:56:44.617805 | orchestrator |
2025-06-02 17:56:44.617809 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-06-02 17:56:44.617813 | orchestrator | Monday 02 June 2025 17:55:13 +0000 (0:00:29.192) 0:00:31.744 ***********
2025-06-02 17:56:44.617816 | orchestrator | changed: [localhost]
2025-06-02 17:56:44.617820 | orchestrator |
2025-06-02 17:56:44.617824 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:56:44.617828 | orchestrator |
2025-06-02 17:56:44.617831 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:56:44.617835 | orchestrator | Monday 02 June 2025 17:55:17 +0000 (0:00:04.014) 0:00:35.758 ***********
2025-06-02 17:56:44.617839 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:56:44.617843 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:56:44.617848 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:56:44.617854 | orchestrator |
2025-06-02 17:56:44.617860 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:56:44.617915 | orchestrator | Monday 02 June 2025 17:55:18 +0000 (0:00:00.666) 0:00:36.425 ***********
2025-06-02 17:56:44.617923 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-06-02 17:56:44.617930 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-06-02 17:56:44.617938 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-06-02 17:56:44.617945 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-06-02 17:56:44.617951 | orchestrator |
2025-06-02 17:56:44.617958 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-06-02 17:56:44.617965 | orchestrator | skipping: no hosts matched
2025-06-02 17:56:44.617973 | orchestrator |
2025-06-02 17:56:44.617981 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:56:44.617985 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:56:44.617991 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:56:44.617997 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:56:44.618001 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:56:44.618005 | orchestrator |
2025-06-02 17:56:44.618008 | orchestrator |
2025-06-02 17:56:44.618040 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:56:44.618046 | orchestrator | Monday 02 June 2025 17:55:18 +0000 (0:00:00.715) 0:00:37.140 ***********
2025-06-02 17:56:44.618050 | orchestrator | ===============================================================================
2025-06-02 17:56:44.618054 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 29.19s
2025-06-02 17:56:44.618058 | orchestrator | Download ironic-agent kernel -------------------------------------------- 4.01s
2025-06-02 17:56:44.618062 | orchestrator | Ensure the destination directory exists --------------------------------- 2.29s
2025-06-02 17:56:44.618066 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s
2025-06-02 17:56:44.618070 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.67s
2025-06-02 17:56:44.618074 | orchestrator |
2025-06-02 17:56:44.618077 | orchestrator |
2025-06-02
17:56:44.618081 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:56:44.618091 | orchestrator |
2025-06-02 17:56:44.618095 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:56:44.618099 | orchestrator | Monday 02 June 2025 17:55:24 +0000 (0:00:00.262) 0:00:00.262 ***********
2025-06-02 17:56:44.618103 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:56:44.618107 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:56:44.618111 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:56:44.618114 | orchestrator |
2025-06-02 17:56:44.618118 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:56:44.618122 | orchestrator | Monday 02 June 2025 17:55:24 +0000 (0:00:00.302) 0:00:00.565 ***********
2025-06-02 17:56:44.618125 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-06-02 17:56:44.618129 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-06-02 17:56:44.618133 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-06-02 17:56:44.618137 | orchestrator |
2025-06-02 17:56:44.618140 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-06-02 17:56:44.618145 | orchestrator |
2025-06-02 17:56:44.618151 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-06-02 17:56:44.618160 | orchestrator | Monday 02 June 2025 17:55:25 +0000 (0:00:00.495) 0:00:01.060 ***********
2025-06-02 17:56:44.618169 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:56:44.618175 | orchestrator |
2025-06-02 17:56:44.618181 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-06-02 17:56:44.618187 | orchestrator | Monday 02 June 2025 17:55:26 +0000 (0:00:01.074) 0:00:02.135 ***********
2025-06-02 17:56:44.618229 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-06-02 17:56:44.618235 | orchestrator |
2025-06-02 17:56:44.618239 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-06-02 17:56:44.618242 | orchestrator | Monday 02 June 2025 17:55:30 +0000 (0:00:04.139) 0:00:06.275 ***********
2025-06-02 17:56:44.618247 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-06-02 17:56:44.618252 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-06-02 17:56:44.618256 | orchestrator |
2025-06-02 17:56:44.618261 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-06-02 17:56:44.618265 | orchestrator | Monday 02 June 2025 17:55:37 +0000 (0:00:06.939) 0:00:13.215 ***********
2025-06-02 17:56:44.618270 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 17:56:44.618275 | orchestrator |
2025-06-02 17:56:44.618280 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-06-02 17:56:44.618284 | orchestrator | Monday 02 June 2025 17:55:40 +0000 (0:00:03.379) 0:00:16.594 ***********
2025-06-02 17:56:44.618289 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 17:56:44.618293 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-06-02 17:56:44.618298 | orchestrator |
2025-06-02 17:56:44.618302 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-06-02 17:56:44.618306 | orchestrator | Monday 02 June 2025 17:55:45 +0000 (0:00:04.182) 0:00:20.777 ***********
2025-06-02 17:56:44.618311 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 17:56:44.618315 | orchestrator
|
2025-06-02 17:56:44.618320 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-06-02 17:56:44.618324 | orchestrator | Monday 02 June 2025 17:55:48 +0000 (0:00:03.534) 0:00:24.312 ***********
2025-06-02 17:56:44.618329 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-06-02 17:56:44.618333 | orchestrator |
2025-06-02 17:56:44.618338 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-06-02 17:56:44.618342 | orchestrator | Monday 02 June 2025 17:55:53 +0000 (0:00:04.412) 0:00:28.725 ***********
2025-06-02 17:56:44.618352 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:56:44.618359 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:56:44.618366 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:56:44.618372 | orchestrator |
2025-06-02 17:56:44.618378 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-06-02 17:56:44.618384 | orchestrator | Monday 02 June 2025 17:55:53 +0000 (0:00:00.341) 0:00:29.067 ***********
2025-06-02 17:56:44.618394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618425 | orchestrator | 
2025-06-02 17:56:44.618431 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-06-02 17:56:44.618436 | orchestrator | Monday 02 June 2025 17:55:54 +0000 (0:00:00.988) 0:00:30.056 ***********
2025-06-02 17:56:44.618442 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:56:44.618448 | orchestrator |
2025-06-02 17:56:44.618454 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-06-02 17:56:44.618460 | orchestrator | Monday 02 June 2025 17:55:54 +0000 (0:00:00.234) 0:00:30.290 ***********
2025-06-02 17:56:44.618467 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:56:44.618473 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:56:44.618479 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:56:44.618485 | orchestrator |
2025-06-02 17:56:44.618491 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-06-02 17:56:44.618502 | orchestrator | Monday 02 June 2025 17:55:55 +0000 (0:00:00.756) 0:00:31.046 ***********
2025-06-02 17:56:44.618509 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:56:44.618515 | orchestrator |
2025-06-02 17:56:44.618521 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-06-02 17:56:44.618526 | orchestrator | Monday 02 June 2025 17:55:56 +0000 (0:00:00.638) 0:00:31.685 ***********
2025-06-02 17:56:44.618533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618553 | orchestrator |
2025-06-02 17:56:44.618564 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2025-06-02 17:56:44.618569 | orchestrator | Monday 02 June 2025 17:55:58 +0000 (0:00:02.292) 0:00:33.978 ***********
2025-06-02 17:56:44.618576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618594 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:56:44.618600 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:56:44.618608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618615 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:56:44.618621 | orchestrator |
2025-06-02 17:56:44.618628 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2025-06-02 17:56:44.618635 | orchestrator | Monday 02 June 2025 17:55:59 +0000 (0:00:01.272) 0:00:35.251 ***********
2025-06-02 17:56:44.618641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618648 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:56:44.618660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618672 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:56:44.618679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618685 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:56:44.618691 | orchestrator |
2025-06-02 17:56:44.618698 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2025-06-02 17:56:44.618704 | orchestrator | Monday 02 June 2025 17:56:00 +0000 (0:00:00.985) 0:00:36.236 ***********
2025-06-02 17:56:44.618711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618742 | orchestrator |
2025-06-02 17:56:44.618750 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2025-06-02 17:56:44.618755 | orchestrator | Monday 02 June 2025 17:56:02 +0000 (0:00:01.497) 0:00:37.734 ***********
2025-06-02 17:56:44.618759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-06-02 17:56:44.618771 | orchestrator |
2025-06-02 17:56:44.618775 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] ***************
2025-06-02 17:56:44.618779 | orchestrator | Monday 02 June 2025 17:56:04 +0000 (0:00:02.833) 0:00:40.567 ***********
2025-06-02 17:56:44.618783 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-06-02 17:56:44.618787 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-06-02 17:56:44.618794 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2)
2025-06-02 17:56:44.618798 | orchestrator |
2025-06-02 17:56:44.618801 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2
configuration] ***************** 2025-06-02 17:56:44.618808 | orchestrator | Monday 02 June 2025 17:56:07 +0000 (0:00:02.144) 0:00:42.712 *********** 2025-06-02 17:56:44.618812 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:56:44.618816 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:56:44.618820 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:56:44.618824 | orchestrator | 2025-06-02 17:56:44.618828 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-02 17:56:44.618832 | orchestrator | Monday 02 June 2025 17:56:09 +0000 (0:00:02.226) 0:00:44.939 *********** 2025-06-02 17:56:44.618839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:56:44.618848 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:56:44.618858 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:56:44.618864 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:56:44.618871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-02 17:56:44.618915 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:56:44.618924 | orchestrator | 2025-06-02 17:56:44.618930 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-02 17:56:44.618936 | orchestrator | Monday 02 June 2025 17:56:11 +0000 (0:00:01.939) 0:00:46.879 *********** 2025-06-02 17:56:44.618956 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:56:44.618963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:56:44.618971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-02 17:56:44.618977 | orchestrator | 2025-06-02 17:56:44.618983 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-06-02 17:56:44.618989 | orchestrator | Monday 02 June 2025 17:56:13 +0000 (0:00:02.741) 0:00:49.620 *********** 2025-06-02 17:56:44.618995 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:56:44.618999 | orchestrator | 2025-06-02 17:56:44.619002 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-06-02 17:56:44.619006 | orchestrator | Monday 02 June 2025 17:56:16 +0000 (0:00:02.438) 0:00:52.058 *********** 2025-06-02 17:56:44.619010 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:56:44.619014 | orchestrator | 2025-06-02 17:56:44.619018 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-06-02 17:56:44.619021 | orchestrator | Monday 02 June 2025 17:56:18 +0000 (0:00:02.338) 0:00:54.397 *********** 2025-06-02 17:56:44.619025 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:56:44.619029 | orchestrator | 2025-06-02 17:56:44.619032 | orchestrator | TASK [placement : Flush handlers] 
********************************************** 2025-06-02 17:56:44.619036 | orchestrator | Monday 02 June 2025 17:56:35 +0000 (0:00:16.635) 0:01:11.032 *********** 2025-06-02 17:56:44.619040 | orchestrator | 2025-06-02 17:56:44.619044 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 17:56:44.619051 | orchestrator | Monday 02 June 2025 17:56:35 +0000 (0:00:00.105) 0:01:11.138 *********** 2025-06-02 17:56:44.619055 | orchestrator | 2025-06-02 17:56:44.619059 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-02 17:56:44.619062 | orchestrator | Monday 02 June 2025 17:56:35 +0000 (0:00:00.087) 0:01:11.225 *********** 2025-06-02 17:56:44.619066 | orchestrator | 2025-06-02 17:56:44.619070 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-06-02 17:56:44.619073 | orchestrator | Monday 02 June 2025 17:56:35 +0000 (0:00:00.072) 0:01:11.297 *********** 2025-06-02 17:56:44.619077 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:56:44.619081 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:56:44.619085 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:56:44.619089 | orchestrator | 2025-06-02 17:56:44.619093 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:56:44.619097 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 17:56:44.619102 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:56:44.619106 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:56:44.619110 | orchestrator | 2025-06-02 17:56:44.619114 | orchestrator | 2025-06-02 17:56:44.619118 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-02 17:56:44.619122 | orchestrator | Monday 02 June 2025 17:56:43 +0000 (0:00:07.487) 0:01:18.785 *********** 2025-06-02 17:56:44.619125 | orchestrator | =============================================================================== 2025-06-02 17:56:44.619132 | orchestrator | placement : Running placement bootstrap container ---------------------- 16.64s 2025-06-02 17:56:44.619136 | orchestrator | placement : Restart placement-api container ----------------------------- 7.49s 2025-06-02 17:56:44.619140 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.94s 2025-06-02 17:56:44.619145 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.41s 2025-06-02 17:56:44.619151 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.18s 2025-06-02 17:56:44.619161 | orchestrator | service-ks-register : placement | Creating services --------------------- 4.14s 2025-06-02 17:56:44.619168 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.54s 2025-06-02 17:56:44.619174 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.38s 2025-06-02 17:56:44.619180 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.83s 2025-06-02 17:56:44.619185 | orchestrator | placement : Check placement containers ---------------------------------- 2.74s 2025-06-02 17:56:44.619192 | orchestrator | placement : Creating placement databases -------------------------------- 2.44s 2025-06-02 17:56:44.619199 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.34s 2025-06-02 17:56:44.619229 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.29s 2025-06-02 17:56:44.619236 | orchestrator | placement : Copying over 
migrate-db.rc.j2 configuration ----------------- 2.23s 2025-06-02 17:56:44.619240 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 2.14s 2025-06-02 17:56:44.619244 | orchestrator | placement : Copying over existing policy file --------------------------- 1.94s 2025-06-02 17:56:44.619248 | orchestrator | placement : Copying over config.json files for services ----------------- 1.50s 2025-06-02 17:56:44.619252 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 1.27s 2025-06-02 17:56:44.619257 | orchestrator | placement : include_tasks ----------------------------------------------- 1.08s 2025-06-02 17:56:44.619260 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.99s 2025-06-02 17:56:44.619269 | orchestrator | 2025-06-02 17:56:44 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:56:44.619344 | orchestrator | 2025-06-02 17:56:44 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:56:44.619351 | orchestrator | 2025-06-02 17:56:44 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:56:44.619355 | orchestrator | 2025-06-02 17:56:44 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:56:47.642885 | orchestrator | 2025-06-02 17:56:47 | INFO  | Task f4ce4db2-2f33-4129-b71c-e8dc672a251b is in state STARTED 2025-06-02 17:56:47.643048 | orchestrator | 2025-06-02 17:56:47 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:56:47.643898 | orchestrator | 2025-06-02 17:56:47 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:56:47.644352 | orchestrator | 2025-06-02 17:56:47 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:56:47.644375 | orchestrator | 2025-06-02 17:56:47 | INFO  | Wait 1 second(s) until the next check 2025-06-02 
17:56:50.674992 | orchestrator | 2025-06-02 17:56:50 | INFO  | Task f4ce4db2-2f33-4129-b71c-e8dc672a251b is in state STARTED 2025-06-02 17:56:50.676903 | orchestrator | 2025-06-02 17:56:50 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:56:50.678397 | orchestrator | 2025-06-02 17:56:50 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:56:50.680946 | orchestrator | 2025-06-02 17:56:50 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:56:50.681007 | orchestrator | 2025-06-02 17:56:50 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:56:53.726495 | orchestrator | 2025-06-02 17:56:53 | INFO  | Task f4ce4db2-2f33-4129-b71c-e8dc672a251b is in state STARTED 2025-06-02 17:56:53.727920 | orchestrator | 2025-06-02 17:56:53 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:56:53.729191 | orchestrator | 2025-06-02 17:56:53 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:56:53.731323 | orchestrator | 2025-06-02 17:56:53 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:56:53.731366 | orchestrator | 2025-06-02 17:56:53 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:56:56.780436 | orchestrator | 2025-06-02 17:56:56 | INFO  | Task f4ce4db2-2f33-4129-b71c-e8dc672a251b is in state STARTED 2025-06-02 17:56:56.782412 | orchestrator | 2025-06-02 17:56:56 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:56:56.784687 | orchestrator | 2025-06-02 17:56:56 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:56:56.786343 | orchestrator | 2025-06-02 17:56:56 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state STARTED 2025-06-02 17:56:56.786373 | orchestrator | 2025-06-02 17:56:56 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:56:59.834314 | orchestrator 
| 2025-06-02 17:56:59 | INFO  | Task f4ce4db2-2f33-4129-b71c-e8dc672a251b is in state STARTED 2025-06-02 17:56:59.836973 | orchestrator | 2025-06-02 17:56:59 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:56:59.839277 | orchestrator | 2025-06-02 17:56:59 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED 2025-06-02 17:56:59.840530 | orchestrator | 2025-06-02 17:56:59 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 2025-06-02 17:56:59.847590 | orchestrator | 2025-06-02 17:56:59.847659 | orchestrator | 2025-06-02 17:56:59.847665 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:56:59.847670 | orchestrator | 2025-06-02 17:56:59.847675 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:56:59.847681 | orchestrator | Monday 02 June 2025 17:53:40 +0000 (0:00:00.894) 0:00:00.894 *********** 2025-06-02 17:56:59.847688 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:56:59.847695 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:56:59.847701 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:56:59.847706 | orchestrator | 2025-06-02 17:56:59.847713 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:56:59.847719 | orchestrator | Monday 02 June 2025 17:53:40 +0000 (0:00:00.344) 0:00:01.238 *********** 2025-06-02 17:56:59.847726 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-06-02 17:56:59.847732 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-06-02 17:56:59.847739 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-06-02 17:56:59.847775 | orchestrator | 2025-06-02 17:56:59.847780 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-06-02 17:56:59.847784 | orchestrator | 2025-06-02 
17:56:59.847788 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 17:56:59.847792 | orchestrator | Monday 02 June 2025 17:53:41 +0000 (0:00:00.845) 0:00:02.084 *********** 2025-06-02 17:56:59.847805 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:56:59.847810 | orchestrator | 2025-06-02 17:56:59.847814 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-06-02 17:56:59.847818 | orchestrator | Monday 02 June 2025 17:53:42 +0000 (0:00:00.889) 0:00:02.973 *********** 2025-06-02 17:56:59.847822 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-06-02 17:56:59.847826 | orchestrator | 2025-06-02 17:56:59.847839 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-06-02 17:56:59.847843 | orchestrator | Monday 02 June 2025 17:53:46 +0000 (0:00:03.372) 0:00:06.346 *********** 2025-06-02 17:56:59.847847 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-06-02 17:56:59.847851 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-06-02 17:56:59.847855 | orchestrator | 2025-06-02 17:56:59.847858 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-06-02 17:56:59.847862 | orchestrator | Monday 02 June 2025 17:53:52 +0000 (0:00:06.538) 0:00:12.884 *********** 2025-06-02 17:56:59.847866 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 17:56:59.847870 | orchestrator | 2025-06-02 17:56:59.847874 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-06-02 17:56:59.847878 | orchestrator | Monday 02 June 2025 17:53:55 +0000 (0:00:03.265) 0:00:16.150 *********** 2025-06-02 
17:56:59.847882 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 17:56:59.847886 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-06-02 17:56:59.847889 | orchestrator | 2025-06-02 17:56:59.847893 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-06-02 17:56:59.847897 | orchestrator | Monday 02 June 2025 17:54:00 +0000 (0:00:04.457) 0:00:20.608 *********** 2025-06-02 17:56:59.847901 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 17:56:59.847905 | orchestrator | 2025-06-02 17:56:59.847909 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-06-02 17:56:59.847913 | orchestrator | Monday 02 June 2025 17:54:04 +0000 (0:00:03.770) 0:00:24.378 *********** 2025-06-02 17:56:59.847916 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-06-02 17:56:59.847920 | orchestrator | 2025-06-02 17:56:59.847926 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-06-02 17:56:59.847952 | orchestrator | Monday 02 June 2025 17:54:08 +0000 (0:00:04.535) 0:00:28.914 *********** 2025-06-02 17:56:59.847963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:56:59.847991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:56:59.847996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:56:59.848001 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848007 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848111 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848125 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848171 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848199 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848241 | orchestrator | 2025-06-02 17:56:59.848246 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-02 17:56:59.848250 | orchestrator | Monday 02 June 2025 17:54:12 +0000 (0:00:04.138) 0:00:33.052 *********** 2025-06-02 17:56:59.848255 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:56:59.848259 | orchestrator | 2025-06-02 17:56:59.848264 | orchestrator | TASK [designate : Set designate policy file] 
*********************************** 2025-06-02 17:56:59.848268 | orchestrator | Monday 02 June 2025 17:54:13 +0000 (0:00:00.249) 0:00:33.302 *********** 2025-06-02 17:56:59.848273 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:56:59.848277 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:56:59.848281 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:56:59.848285 | orchestrator | 2025-06-02 17:56:59.848290 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 17:56:59.848294 | orchestrator | Monday 02 June 2025 17:54:13 +0000 (0:00:00.550) 0:00:33.852 *********** 2025-06-02 17:56:59.848298 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:56:59.848303 | orchestrator | 2025-06-02 17:56:59.848307 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-02 17:56:59.848312 | orchestrator | Monday 02 June 2025 17:54:15 +0000 (0:00:01.719) 0:00:35.572 *********** 2025-06-02 17:56:59.848319 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 
17:56:59.848331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:56:59.848338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:56:59.848351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 
'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848403 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.848441 | orchestrator | 2025-06-02 17:56:59.848446 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-06-02 17:56:59.848450 | orchestrator | Monday 02 June 2025 17:54:22 +0000 (0:00:07.442) 0:00:43.014 *********** 2025-06-02 17:56:59.848455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:56:59.848459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:56:59.848467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.848471 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.848483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.848490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.848497 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:56:59.848503 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 
'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:56:59.848511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:56:59.848523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 2025-06-02 17:56:59 | INFO  | Task 3a84e346-c99d-4702-be95-569ba4ad6108 is in state SUCCESS 2025-06-02 17:56:59.848707 | orchestrator | 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.848721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.848744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.848752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.848759 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:56:59.848766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:56:59.848870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:56:59.848886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.848894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.848907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.848914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.848921 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:56:59.848967 | orchestrator | 2025-06-02 17:56:59.848974 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-06-02 17:56:59.848979 | orchestrator | Monday 02 June 2025 17:54:24 +0000 (0:00:02.015) 0:00:45.029 *********** 2025-06-02 17:56:59.848983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:56:59.848987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:56:59.848996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849345 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:56:59.849352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:56:59.849359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:56:59.849385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:56:59.849405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:56:59.849410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 
'timeout': '30'}}})  2025-06-02 17:56:59.849414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849444 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:56:59.849448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849452 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:56:59.849455 | orchestrator | 2025-06-02 17:56:59.849459 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-06-02 17:56:59.849463 | orchestrator | Monday 02 June 2025 17:54:27 +0000 (0:00:02.584) 0:00:47.613 *********** 2025-06-02 17:56:59.849467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:56:59.849472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:56:59.849494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': 
'30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:56:59.849498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849536 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 
5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849585 | orchestrator | 2025-06-02 17:56:59.849589 | orchestrator | TASK [designate : 
Copying over designate.conf] ********************************* 2025-06-02 17:56:59.849607 | orchestrator | Monday 02 June 2025 17:54:35 +0000 (0:00:07.777) 0:00:55.390 *********** 2025-06-02 17:56:59.849613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:56:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:56:59.849629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:56:59.849637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-02 17:56:59.849643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849651 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849680 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849714 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849717 | orchestrator | 2025-06-02 17:56:59.849721 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-02 17:56:59.849725 | orchestrator | Monday 02 June 2025 17:54:56 +0000 (0:00:20.910) 0:01:16.300 *********** 2025-06-02 17:56:59.849729 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-02 17:56:59.849733 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-02 17:56:59.849737 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-02 17:56:59.849741 | orchestrator | 2025-06-02 17:56:59.849747 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-02 17:56:59.849753 | orchestrator | Monday 02 June 2025 17:55:04 +0000 (0:00:08.173) 0:01:24.474 *********** 2025-06-02 17:56:59.849760 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-02 17:56:59.849766 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-02 17:56:59.849773 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-02 17:56:59.849780 | orchestrator | 2025-06-02 
17:56:59.849787 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-02 17:56:59.849794 | orchestrator | Monday 02 June 2025 17:55:09 +0000 (0:00:04.935) 0:01:29.409 *********** 2025-06-02 17:56:59.849801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:56:59.849813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2025-06-02 17:56:59.849825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:56:59.849832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 
'timeout': '30'}}})  2025-06-02 17:56:59.849889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849904 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849907 | orchestrator | 2025-06-02 17:56:59.849911 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-02 17:56:59.849915 | orchestrator | Monday 02 June 2025 17:55:12 +0000 (0:00:03.547) 0:01:32.956 *********** 2025-06-02 17:56:59.849922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:56:59.849926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:56:59.849930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:56:59.849937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-02 17:56:59.849988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2025-06-02 17:56:59.849993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.849997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-02 17:56:59.850001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.850008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 
'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.850013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-02 17:56:59.850054 | orchestrator | 2025-06-02 17:56:59.850058 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-02 17:56:59.850067 | orchestrator | Monday 02 June 2025 17:55:16 +0000 (0:00:03.345) 0:01:36.302 *********** 2025-06-02 17:56:59.850071 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:56:59.850076 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:56:59.850080 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:56:59.850084 | orchestrator | 2025-06-02 17:56:59.850088 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-02 17:56:59.850093 | orchestrator | Monday 02 June 2025 17:55:16 +0000 (0:00:00.462) 0:01:36.764 *********** 2025-06-02 17:56:59.850097 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-02 17:56:59.850101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-02 17:56:59.850105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850129 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:56:59.850133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 17:56:59.850137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 17:56:59.850141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850164 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:56:59.850168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 17:56:59.850172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 17:56:59.850176 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850266 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:56:59.850272 | orchestrator |
2025-06-02 17:56:59.850278 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-06-02 17:56:59.850284 | orchestrator | Monday 02 June 2025 17:55:17 +0000 (0:00:00.893) 0:01:37.657 ***********
2025-06-02 17:56:59.850291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 17:56:59.850298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 17:56:59.850304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-06-02 17:56:59.850312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 17:56:59.850320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 17:56:59.850324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-06-02 17:56:59.850328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions':
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850341 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850353 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image':
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-06-02 17:56:59.850391 | orchestrator |
2025-06-02 17:56:59.850395 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-06-02 17:56:59.850408 | orchestrator | Monday 02 June 2025 17:55:22 +0000 (0:00:05.607) 0:01:43.265 ***********
2025-06-02 17:56:59.850412 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:56:59.850416 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:56:59.850425 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:56:59.850429 | orchestrator |
2025-06-02 17:56:59.850433 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-06-02 17:56:59.850437 | orchestrator | Monday 02 June 2025 17:55:23 +0000 (0:00:00.371) 0:01:43.636 ***********
2025-06-02 17:56:59.850441 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-06-02 17:56:59.850445 | orchestrator |
2025-06-02 17:56:59.850449 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-06-02 17:56:59.850453 | orchestrator | Monday 02 June 2025 17:55:26 +0000 (0:00:03.053) 0:01:46.689 ***********
2025-06-02 17:56:59.850456 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-06-02 17:56:59.850460 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-06-02 17:56:59.850464 | orchestrator |
2025-06-02 17:56:59.850468 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-06-02 17:56:59.850471 | orchestrator | Monday 02 June 2025 17:55:29 +0000 (0:00:02.684) 0:01:49.374 ***********
2025-06-02 17:56:59.850475 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:56:59.850479 | orchestrator |
2025-06-02 17:56:59.850483 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-06-02 17:56:59.850487 | orchestrator | Monday 02 June 2025 17:55:46 +0000 (0:00:17.379) 0:02:06.753 ***********
2025-06-02 17:56:59.850490 | orchestrator |
2025-06-02 17:56:59.850494 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-06-02 17:56:59.850498 | orchestrator | Monday 02 June 2025 17:55:46 +0000 (0:00:00.137) 0:02:06.890 ***********
2025-06-02 17:56:59.850502 | orchestrator |
2025-06-02 17:56:59.850505 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-06-02 17:56:59.850509 | orchestrator | Monday 02 June 2025 17:55:46 +0000 (0:00:00.066) 0:02:06.956 ***********
2025-06-02 17:56:59.850513 | orchestrator |
2025-06-02 17:56:59.850517 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-06-02 17:56:59.850521 | orchestrator | Monday 02 June 2025 17:55:46 +0000 (0:00:00.065) 0:02:07.022 ***********
2025-06-02 17:56:59.850524 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:56:59.850528 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:56:59.850532 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:56:59.850536 | orchestrator |
2025-06-02 17:56:59.850539 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-06-02 17:56:59.850543 | orchestrator | Monday 02 June 2025 17:55:55 +0000 (0:00:09.125) 0:02:16.148 ***********
2025-06-02 17:56:59.850547 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:56:59.850551 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:56:59.850554 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:56:59.850558 | orchestrator |
2025-06-02 17:56:59.850562 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-06-02 17:56:59.850570 | orchestrator | Monday 02 June 2025 17:56:08 +0000 (0:00:12.946) 0:02:29.094 ***********
2025-06-02 17:56:59.850574 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:56:59.850580 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:56:59.850585 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:56:59.850591 | orchestrator |
2025-06-02 17:56:59.850596 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-06-02 17:56:59.850602 | orchestrator | Monday 02 June 2025 17:56:24 +0000 (0:00:15.843) 0:02:44.938 ***********
2025-06-02 17:56:59.850608 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:56:59.850614 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:56:59.850620 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:56:59.850627 | orchestrator |
2025-06-02 17:56:59.850632 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-06-02 17:56:59.850636 | orchestrator | Monday 02 June 2025 17:56:34 +0000 (0:00:09.940) 0:02:54.879 ***********
2025-06-02 17:56:59.850640 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:56:59.850644 | orchestrator |
changed: [testbed-node-2]
2025-06-02 17:56:59.850647 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:56:59.850651 | orchestrator |
2025-06-02 17:56:59.850655 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-06-02 17:56:59.850658 | orchestrator | Monday 02 June 2025 17:56:41 +0000 (0:00:07.093) 0:03:01.972 ***********
2025-06-02 17:56:59.850662 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:56:59.850666 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:56:59.850670 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:56:59.850673 | orchestrator |
2025-06-02 17:56:59.850677 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-06-02 17:56:59.850681 | orchestrator | Monday 02 June 2025 17:56:51 +0000 (0:00:09.761) 0:03:11.734 ***********
2025-06-02 17:56:59.850685 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:56:59.850688 | orchestrator |
2025-06-02 17:56:59.850692 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:56:59.850696 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-02 17:56:59.850701 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:56:59.850708 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-06-02 17:56:59.850712 | orchestrator |
2025-06-02 17:56:59.850716 | orchestrator |
2025-06-02 17:56:59.850719 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:56:59.850723 | orchestrator | Monday 02 June 2025 17:56:58 +0000 (0:00:07.051) 0:03:18.786 ***********
2025-06-02 17:56:59.850727 | orchestrator | ===============================================================================
2025-06-02 17:56:59.850731 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.91s
2025-06-02 17:56:59.850734 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.38s
2025-06-02 17:56:59.850738 | orchestrator | designate : Restart designate-central container ------------------------ 15.84s
2025-06-02 17:56:59.850742 | orchestrator | designate : Restart designate-api container ---------------------------- 12.95s
2025-06-02 17:56:59.850746 | orchestrator | designate : Restart designate-producer container ------------------------ 9.94s
2025-06-02 17:56:59.850749 | orchestrator | designate : Restart designate-worker container -------------------------- 9.76s
2025-06-02 17:56:59.850753 | orchestrator | designate : Restart designate-backend-bind9 container ------------------- 9.13s
2025-06-02 17:56:59.850757 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.17s
2025-06-02 17:56:59.850760 | orchestrator | designate : Copying over config.json files for services ----------------- 7.78s
2025-06-02 17:56:59.850768 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.44s
2025-06-02 17:56:59.850771 | orchestrator | designate : Restart designate-mdns container ---------------------------- 7.09s
2025-06-02 17:56:59.850775 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.05s
2025-06-02 17:56:59.850779 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.54s
2025-06-02 17:56:59.850783 | orchestrator | designate : Check designate containers ---------------------------------- 5.61s
2025-06-02 17:56:59.850786 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.94s
2025-06-02 17:56:59.850790 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.54s
2025-06-02 17:56:59.850794 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.46s
2025-06-02 17:56:59.850798 | orchestrator | designate : Ensuring config directories exist --------------------------- 4.14s
2025-06-02 17:56:59.850802 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.77s
2025-06-02 17:56:59.850805 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.55s
2025-06-02 17:57:02.881701 | orchestrator | 2025-06-02 17:57:02 | INFO  | Task f4ce4db2-2f33-4129-b71c-e8dc672a251b is in state STARTED
2025-06-02 17:57:02.884633 | orchestrator | 2025-06-02 17:57:02 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED
2025-06-02 17:57:02.885024 | orchestrator | 2025-06-02 17:57:02 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED
2025-06-02 17:57:02.887517 | orchestrator | 2025-06-02 17:57:02 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED
2025-06-02 17:57:02.887569 | orchestrator | 2025-06-02 17:57:02 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:57:05.919500 | orchestrator | 2025-06-02 17:57:05 | INFO  | Task f4ce4db2-2f33-4129-b71c-e8dc672a251b is in state STARTED
2025-06-02 17:57:05.920193 | orchestrator | 2025-06-02 17:57:05 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED
2025-06-02 17:57:05.920980 | orchestrator | 2025-06-02 17:57:05 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state STARTED
2025-06-02 17:57:05.921973 | orchestrator | 2025-06-02 17:57:05 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED
2025-06-02 17:57:05.922011 | orchestrator | 2025-06-02 17:57:05 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:57:08.960486 | orchestrator | 2025-06-02 17:57:08 | INFO  | Task f4ce4db2-2f33-4129-b71c-e8dc672a251b is in state STARTED
2025-06-02 17:57:08.962695 | orchestrator | 2025-06-02 17:57:08 | INFO  | Task
6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:57:08.966907 | orchestrator | 2025-06-02 17:57:08.966985 | orchestrator | 2025-06-02 17:57:08 | INFO  | Task 64f72f04-e3fd-4a1d-b15e-a077b741b206 is in state SUCCESS 2025-06-02 17:57:08.968420 | orchestrator | 2025-06-02 17:57:08.968493 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:57:08.968505 | orchestrator | 2025-06-02 17:57:08.968513 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:57:08.968522 | orchestrator | Monday 02 June 2025 17:52:25 +0000 (0:00:00.256) 0:00:00.256 *********** 2025-06-02 17:57:08.968530 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:08.968538 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:57:08.968546 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:57:08.968553 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:57:08.968561 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:57:08.968568 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:57:08.968576 | orchestrator | 2025-06-02 17:57:08.968583 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:57:08.968591 | orchestrator | Monday 02 June 2025 17:52:26 +0000 (0:00:00.722) 0:00:00.978 *********** 2025-06-02 17:57:08.968624 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-02 17:57:08.968634 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-02 17:57:08.968642 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-02 17:57:08.968649 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-02 17:57:08.968656 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-02 17:57:08.968663 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-02 17:57:08.968670 | orchestrator | 2025-06-02 
17:57:08.968677 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-02 17:57:08.968685 | orchestrator | 2025-06-02 17:57:08.968692 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 17:57:08.968699 | orchestrator | Monday 02 June 2025 17:52:27 +0000 (0:00:00.631) 0:00:01.610 *********** 2025-06-02 17:57:08.968708 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:57:08.968716 | orchestrator | 2025-06-02 17:57:08.968723 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-02 17:57:08.968731 | orchestrator | Monday 02 June 2025 17:52:28 +0000 (0:00:01.287) 0:00:02.897 *********** 2025-06-02 17:57:08.968738 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:08.968745 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:57:08.968752 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:57:08.968759 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:57:08.968766 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:57:08.968773 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:57:08.968781 | orchestrator | 2025-06-02 17:57:08.968788 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-06-02 17:57:08.968794 | orchestrator | Monday 02 June 2025 17:52:29 +0000 (0:00:01.353) 0:00:04.250 *********** 2025-06-02 17:57:08.968801 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:57:08.968808 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:57:08.968816 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:57:08.968823 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:57:08.968830 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:57:08.968837 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:57:08.968842 | orchestrator | 2025-06-02 
17:57:08.968847 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-06-02 17:57:08.968851 | orchestrator | Monday 02 June 2025 17:52:30 +0000 (0:00:01.158) 0:00:05.408 ***********
2025-06-02 17:57:08.968856 | orchestrator | ok: [testbed-node-0] => {
2025-06-02 17:57:08.968861 | orchestrator |  "changed": false,
2025-06-02 17:57:08.968865 | orchestrator |  "msg": "All assertions passed"
2025-06-02 17:57:08.968869 | orchestrator | }
2025-06-02 17:57:08.968874 | orchestrator | ok: [testbed-node-1] => {
2025-06-02 17:57:08.968881 | orchestrator |  "changed": false,
2025-06-02 17:57:08.968888 | orchestrator |  "msg": "All assertions passed"
2025-06-02 17:57:08.968894 | orchestrator | }
2025-06-02 17:57:08.968901 | orchestrator | ok: [testbed-node-2] => {
2025-06-02 17:57:08.968908 | orchestrator |  "changed": false,
2025-06-02 17:57:08.968915 | orchestrator |  "msg": "All assertions passed"
2025-06-02 17:57:08.968921 | orchestrator | }
2025-06-02 17:57:08.968928 | orchestrator | ok: [testbed-node-3] => {
2025-06-02 17:57:08.968935 | orchestrator |  "changed": false,
2025-06-02 17:57:08.968943 | orchestrator |  "msg": "All assertions passed"
2025-06-02 17:57:08.968951 | orchestrator | }
2025-06-02 17:57:08.968958 | orchestrator | ok: [testbed-node-4] => {
2025-06-02 17:57:08.968965 | orchestrator |  "changed": false,
2025-06-02 17:57:08.968973 | orchestrator |  "msg": "All assertions passed"
2025-06-02 17:57:08.968979 | orchestrator | }
2025-06-02 17:57:08.968985 | orchestrator | ok: [testbed-node-5] => {
2025-06-02 17:57:08.968990 | orchestrator |  "changed": false,
2025-06-02 17:57:08.968995 | orchestrator |  "msg": "All assertions passed"
2025-06-02 17:57:08.969009 | orchestrator | }
2025-06-02 17:57:08.969016 | orchestrator |
2025-06-02 17:57:08.969024 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-06-02 17:57:08.969031 | orchestrator | Monday 02 June 2025
17:52:31 +0000 (0:00:00.818) 0:00:06.226 ***********
2025-06-02 17:57:08.969491 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.969506 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.969512 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.969520 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.969527 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.969534 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.969541 | orchestrator |
2025-06-02 17:57:08.969549 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-06-02 17:57:08.969557 | orchestrator | Monday 02 June 2025 17:52:32 +0000 (0:00:00.618) 0:00:06.845 ***********
2025-06-02 17:57:08.969565 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-06-02 17:57:08.969572 | orchestrator |
2025-06-02 17:57:08.969580 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-06-02 17:57:08.969587 | orchestrator | Monday 02 June 2025 17:52:35 +0000 (0:00:03.468) 0:00:10.314 ***********
2025-06-02 17:57:08.969594 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-06-02 17:57:08.969604 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-06-02 17:57:08.969611 | orchestrator |
2025-06-02 17:57:08.969700 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-06-02 17:57:08.969712 | orchestrator | Monday 02 June 2025 17:52:42 +0000 (0:00:06.486) 0:00:16.800 ***********
2025-06-02 17:57:08.969720 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-06-02 17:57:08.969727 | orchestrator |
2025-06-02 17:57:08.969735 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-06-02 17:57:08.969742 |
orchestrator | Monday 02 June 2025 17:52:45 +0000 (0:00:03.431) 0:00:20.232 ***********
2025-06-02 17:57:08.969750 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-06-02 17:57:08.969757 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-06-02 17:57:08.969764 | orchestrator |
2025-06-02 17:57:08.969771 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-06-02 17:57:08.969778 | orchestrator | Monday 02 June 2025 17:52:49 +0000 (0:00:04.076) 0:00:24.309 ***********
2025-06-02 17:57:08.969785 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-06-02 17:57:08.969792 | orchestrator |
2025-06-02 17:57:08.969798 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-06-02 17:57:08.969803 | orchestrator | Monday 02 June 2025 17:52:53 +0000 (0:00:03.276) 0:00:27.585 ***********
2025-06-02 17:57:08.969807 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-06-02 17:57:08.969812 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-06-02 17:57:08.969816 | orchestrator |
2025-06-02 17:57:08.969821 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-06-02 17:57:08.969825 | orchestrator | Monday 02 June 2025 17:53:00 +0000 (0:00:07.817) 0:00:35.403 ***********
2025-06-02 17:57:08.969829 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.969834 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.969838 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.969842 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.969847 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.969851 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.969855 | orchestrator |
2025-06-02 17:57:08.969859 | orchestrator | TASK [Load and persist kernel modules]
*****************************************
2025-06-02 17:57:08.969864 | orchestrator | Monday 02 June 2025 17:53:01 +0000 (0:00:00.631) 0:00:36.034 ***********
2025-06-02 17:57:08.969868 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.969883 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.969887 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.969891 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.969895 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.969900 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.969904 | orchestrator |
2025-06-02 17:57:08.969908 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-06-02 17:57:08.969977 | orchestrator | Monday 02 June 2025 17:53:03 +0000 (0:00:01.806) 0:00:37.840 ***********
2025-06-02 17:57:08.969983 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:57:08.969988 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:57:08.969993 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:57:08.969997 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:57:08.970002 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:57:08.970006 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:57:08.970010 | orchestrator |
2025-06-02 17:57:08.970051 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-06-02 17:57:08.970056 | orchestrator | Monday 02 June 2025 17:53:05 +0000 (0:00:01.646) 0:00:39.487 ***********
2025-06-02 17:57:08.970061 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.970065 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.970069 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.970074 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.970078 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.970082 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.970086
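The `Monday 02 June 2025 … (0:00:01.646) 0:00:39.487` lines interleaved through this log come from Ansible's `profile_tasks` callback: the parenthesized value is the duration of the task that just finished, and the trailing value is the cumulative play time. A minimal parser sketch for pulling those two numbers out of such a line (the function names and structure are my own, not part of this job):

```python
import re

# profile_tasks emits e.g.:
#   Monday 02 June 2025 17:53:05 +0000 (0:00:01.646) 0:00:39.487 ***********
# (previous task duration)  cumulative play time
TIMING_RE = re.compile(r"\((\d+):(\d+):(\d+\.\d+)\)\s+(\d+):(\d+):(\d+\.\d+)")

def to_seconds(h, m, s):
    """Convert H:MM:SS.mmm components to seconds as a float."""
    return int(h) * 3600 + int(m) * 60 + float(s)

def parse_timing(line):
    """Return (task_seconds, cumulative_seconds), or None if the line
    carries no profile_tasks timing."""
    m = TIMING_RE.search(line)
    if m is None:
        return None
    g = m.groups()
    return to_seconds(*g[:3]), to_seconds(*g[3:])
```

Running this over the headers above confirms the bookkeeping, e.g. the sysctl line yields a 1.646 s task against a 39.487 s running total.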
| orchestrator | 2025-06-02 17:57:08.970091 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-06-02 17:57:08.970095 | orchestrator | Monday 02 June 2025 17:53:07 +0000 (0:00:01.945) 0:00:41.432 *********** 2025-06-02 17:57:08.970103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:57:08.970135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:57:08.970141 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:57:08.970155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:57:08.970161 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:57:08.970166 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:57:08.970171 | orchestrator | 2025-06-02 17:57:08.970175 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-06-02 17:57:08.970179 | orchestrator | Monday 02 June 2025 17:53:09 +0000 (0:00:02.767) 0:00:44.200 *********** 2025-06-02 17:57:08.970184 | orchestrator | [WARNING]: Skipped 2025-06-02 17:57:08.970189 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-06-02 17:57:08.970194 | orchestrator | due to this access issue: 2025-06-02 17:57:08.970198 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-06-02 17:57:08.970202 | orchestrator | a directory 2025-06-02 17:57:08.970246 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:57:08.970251 | orchestrator | 2025-06-02 17:57:08.970255 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-02 17:57:08.970276 | orchestrator | Monday 02 June 2025 17:53:10 +0000 (0:00:00.760) 0:00:44.960 *********** 2025-06-02 17:57:08.970282 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:57:08.970288 | orchestrator | 2025-06-02 17:57:08.970292 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-06-02 17:57:08.970302 | orchestrator | Monday 02 June 2025 17:53:11 +0000 (0:00:01.350) 0:00:46.311 *********** 2025-06-02 17:57:08.970307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:57:08.970312 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:57:08.970316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:57:08.970321 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:57:08.970340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:57:08.970352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:57:08.970360 | orchestrator | 2025-06-02 17:57:08.970368 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-06-02 17:57:08.970375 | orchestrator | Monday 02 June 2025 17:53:15 +0000 (0:00:04.055) 0:00:50.366 *********** 2025-06-02 17:57:08.970384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:57:08.970391 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.970399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:57:08.970406 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.970416 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:57:08.970424 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.970464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:57:08.970477 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.970484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:57:08.970492 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.970498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:57:08.970506 | orchestrator | skipping: 
[testbed-node-3] 2025-06-02 17:57:08.970512 | orchestrator | 2025-06-02 17:57:08.970519 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-06-02 17:57:08.970526 | orchestrator | Monday 02 June 2025 17:53:18 +0000 (0:00:02.914) 0:00:53.281 *********** 2025-06-02 17:57:08.970533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:57:08.970541 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.970570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:57:08.970584 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.970593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:57:08.970598 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.970603 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.970607 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.970611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.970616 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.970621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.970632 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.970636 | orchestrator |
2025-06-02 17:57:08.970642 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-06-02 17:57:08.970647 | orchestrator | Monday 02 June 2025 17:53:23 +0000 (0:00:04.857) 0:00:58.139 ***********
2025-06-02 17:57:08.970652 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.970657 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.970662 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.970667 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.970672 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.970677 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.970682 | orchestrator |
2025-06-02 17:57:08.970687 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2025-06-02 17:57:08.970696 | orchestrator | Monday 02 June 2025 17:53:27 +0000 (0:00:03.660) 0:01:01.799 ***********
2025-06-02 17:57:08.970701 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.970706 | orchestrator |
2025-06-02 17:57:08.970711 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2025-06-02 17:57:08.970717 | orchestrator | Monday 02 June 2025 17:53:27 +0000 (0:00:00.126) 0:01:01.925 ***********
2025-06-02 17:57:08.970722 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.970727 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.970732 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.970737 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.970742 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.970747 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.970752 | orchestrator |
2025-06-02 17:57:08.970757 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2025-06-02 17:57:08.970762 | orchestrator | Monday 02 June 2025 17:53:28 +0000 (0:00:00.688) 0:01:02.614 ***********
2025-06-02 17:57:08.970767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.970773 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.970778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.970793 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.970798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.970804 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.970816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.970821 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.970827 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.970833 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.970841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.970848 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.970856 | orchestrator |
2025-06-02 17:57:08.970863 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-06-02 17:57:08.970870 | orchestrator | Monday 02 June 2025 17:53:30 +0000 (0:00:02.704) 0:01:05.319 ***********
2025-06-02 17:57:08.970877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.970894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.970908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.970917 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.970926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.970934 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.970947 | orchestrator |
2025-06-02 17:57:08.970956 | orchestrator | TASK [neutron : Copying over neutron.conf] *************************************
2025-06-02 17:57:08.970964 | orchestrator | Monday 02 June 2025 17:53:35 +0000 (0:00:04.403) 0:01:09.722 ***********
2025-06-02 17:57:08.970974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.970985 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.970991 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.970996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.971005 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.971009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.971014 | orchestrator |
2025-06-02 17:57:08.971018 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2025-06-02 17:57:08.971023 | orchestrator | Monday 02 June 2025 17:53:43 +0000 (0:00:08.041) 0:01:17.764 ***********
2025-06-02 17:57:08.971033 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.971038 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.971043 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.971048 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.971052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.971061 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.971069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.971076 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.971089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.971096 | orchestrator |
2025-06-02 17:57:08.971103 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-06-02 17:57:08.971111 | orchestrator | Monday 02 June 2025 17:53:46 +0000 (0:00:03.182) 0:01:20.947 ***********
2025-06-02 17:57:08.971119 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.971126 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.971133 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:57:08.971140 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.971147 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:57:08.971155 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:57:08.971160 | orchestrator |
2025-06-02 17:57:08.971169 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-06-02 17:57:08.971173 | orchestrator | Monday 02 June 2025 17:53:49 +0000 (0:00:03.012) 0:01:23.960 ***********
2025-06-02 17:57:08.971178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.971183 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.971187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.971192 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.971197 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-06-02 17:57:08.971201 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.971232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.971242 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.971255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.971262 | orchestrator |
2025-06-02 17:57:08.971269 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-06-02 17:57:08.971277 | orchestrator | Monday 02 June 2025 17:53:53 +0000 (0:00:03.742) 0:01:27.702 ***********
2025-06-02 17:57:08.971283 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.971289 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.971296 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.971304 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.971312 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.971320 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.971325 | orchestrator |
2025-06-02 17:57:08.971330 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-06-02 17:57:08.971334 | orchestrator | Monday 02 June 2025 17:53:55 +0000 (0:00:02.389) 0:01:30.092 ***********
2025-06-02 17:57:08.971339 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.971344 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.971352 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.971359 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.971366 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.971373 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.971380 | orchestrator |
2025-06-02 17:57:08.971387 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-06-02 17:57:08.971393 | orchestrator | Monday 02 June 2025 17:53:58 +0000 (0:00:02.449) 0:01:32.541 ***********
2025-06-02 17:57:08.971401 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.971408 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.971415 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.971422 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.971429 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.971436 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.971442 | orchestrator |
2025-06-02 17:57:08.971450 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-06-02 17:57:08.971457 | orchestrator | Monday 02 June 2025 17:54:01 +0000 (0:00:03.883) 0:01:36.424 ***********
2025-06-02 17:57:08.971464 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.971471 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.971478 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.971484 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.971491 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.971498 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.971505 | orchestrator |
2025-06-02 17:57:08.971513 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-06-02 17:57:08.971528 | orchestrator | Monday 02 June 2025 17:54:04 +0000 (0:00:02.445) 0:01:38.870 ***********
2025-06-02 17:57:08.971539 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.971549 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.971556 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.971562 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.971569 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.971576 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.971582 | orchestrator |
2025-06-02 17:57:08.971597 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-06-02 17:57:08.971605 | orchestrator | Monday 02 June 2025 17:54:07 +0000 (0:00:03.556) 0:01:42.426 ***********
2025-06-02 17:57:08.971612 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.971620 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.971627 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.971633 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.971639 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.971647 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.971654 | orchestrator |
2025-06-02 17:57:08.971661 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-06-02 17:57:08.971668 | orchestrator | Monday 02 June 2025 17:54:11 +0000 (0:00:03.059) 0:01:45.486 ***********
2025-06-02 17:57:08.971676 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 17:57:08.971684 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.971692 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 17:57:08.971700 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:57:08.971707 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 17:57:08.971714 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.971719 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 17:57:08.971724 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.971728 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 17:57:08.971733 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:57:08.971737 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-06-02 17:57:08.971742 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:57:08.971746 | orchestrator |
2025-06-02 17:57:08.971751 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-06-02 17:57:08.971755 | orchestrator | Monday 02 June 2025 17:54:14 +0000 (0:00:03.658) 0:01:49.144 ***********
2025-06-02 17:57:08.971760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.971766 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:57:08.971770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.971780 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:57:08.971790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-06-02 17:57:08.971795 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:57:08.971800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared',
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:57:08.971805 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.971809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:57:08.971814 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:08.971818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:57:08.971826 | orchestrator | skipping: [testbed-node-5] 
2025-06-02 17:57:08.971831 | orchestrator | 2025-06-02 17:57:08.971835 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-02 17:57:08.971840 | orchestrator | Monday 02 June 2025 17:54:17 +0000 (0:00:03.254) 0:01:52.399 *********** 2025-06-02 17:57:08.971844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:57:08.971849 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.971859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:57:08.971863 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.971868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:57:08.971873 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.971877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:57:08.971887 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:08.971905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:57:08.971910 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.972047 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:57:08.972083 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.972091 | orchestrator | 2025-06-02 17:57:08.972099 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] 
******************************* 2025-06-02 17:57:08.972108 | orchestrator | Monday 02 June 2025 17:54:21 +0000 (0:00:03.062) 0:01:55.462 *********** 2025-06-02 17:57:08.972116 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.972123 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.972131 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.972139 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:08.972146 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.972299 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.972313 | orchestrator | 2025-06-02 17:57:08.972321 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-06-02 17:57:08.972335 | orchestrator | Monday 02 June 2025 17:54:25 +0000 (0:00:04.168) 0:01:59.630 *********** 2025-06-02 17:57:08.972343 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.972350 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.972357 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.972364 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:08.972371 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:08.972378 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:57:08.972385 | orchestrator | 2025-06-02 17:57:08.972391 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-06-02 17:57:08.972398 | orchestrator | Monday 02 June 2025 17:54:31 +0000 (0:00:06.001) 0:02:05.632 *********** 2025-06-02 17:57:08.972405 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.972412 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.972420 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.972427 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.972434 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.972441 | orchestrator | skipping: [testbed-node-3] 
2025-06-02 17:57:08.972448 | orchestrator | 2025-06-02 17:57:08.972457 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-06-02 17:57:08.972467 | orchestrator | Monday 02 June 2025 17:54:33 +0000 (0:00:02.718) 0:02:08.350 *********** 2025-06-02 17:57:08.972479 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.972504 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.972514 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.972520 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.972528 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:08.972534 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.972541 | orchestrator | 2025-06-02 17:57:08.972549 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-06-02 17:57:08.972556 | orchestrator | Monday 02 June 2025 17:54:37 +0000 (0:00:03.803) 0:02:12.154 *********** 2025-06-02 17:57:08.972639 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.972648 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.972656 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.972665 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:08.972672 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.972680 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.972687 | orchestrator | 2025-06-02 17:57:08.972695 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-06-02 17:57:08.972702 | orchestrator | Monday 02 June 2025 17:54:41 +0000 (0:00:03.766) 0:02:15.921 *********** 2025-06-02 17:57:08.972710 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.972718 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.972725 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.972732 | orchestrator | skipping: [testbed-node-3] 
2025-06-02 17:57:08.972739 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.972746 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.972753 | orchestrator | 2025-06-02 17:57:08.972759 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-06-02 17:57:08.972767 | orchestrator | Monday 02 June 2025 17:54:45 +0000 (0:00:04.116) 0:02:20.037 *********** 2025-06-02 17:57:08.972773 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.972780 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.972787 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.972794 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.972802 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.972809 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:08.972817 | orchestrator | 2025-06-02 17:57:08.972824 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-06-02 17:57:08.972832 | orchestrator | Monday 02 June 2025 17:54:48 +0000 (0:00:03.046) 0:02:23.084 *********** 2025-06-02 17:57:08.972840 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.972845 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.972849 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.972854 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:08.972859 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.972863 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.972868 | orchestrator | 2025-06-02 17:57:08.972875 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-06-02 17:57:08.972882 | orchestrator | Monday 02 June 2025 17:54:51 +0000 (0:00:03.203) 0:02:26.288 *********** 2025-06-02 17:57:08.972888 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.972893 | orchestrator | skipping: [testbed-node-1] 
2025-06-02 17:57:08.972898 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.972902 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.972907 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:08.972912 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.972916 | orchestrator | 2025-06-02 17:57:08.972920 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-06-02 17:57:08.972925 | orchestrator | Monday 02 June 2025 17:54:54 +0000 (0:00:03.139) 0:02:29.427 *********** 2025-06-02 17:57:08.972931 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:08.972936 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.972941 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.972946 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.972960 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.972965 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.972971 | orchestrator | 2025-06-02 17:57:08.972976 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-06-02 17:57:08.972981 | orchestrator | Monday 02 June 2025 17:54:58 +0000 (0:00:03.531) 0:02:32.958 *********** 2025-06-02 17:57:08.972987 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 17:57:08.972993 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.972998 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 17:57:08.973003 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.973019 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 17:57:08.973024 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.973036 | orchestrator | skipping: [testbed-node-3] => 
(item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 17:57:08.973042 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:08.973047 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 17:57:08.973052 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.973057 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-02 17:57:08.973062 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.973067 | orchestrator | 2025-06-02 17:57:08.973073 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-06-02 17:57:08.973078 | orchestrator | Monday 02 June 2025 17:55:03 +0000 (0:00:04.595) 0:02:37.554 *********** 2025-06-02 17:57:08.973085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:57:08.973092 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.973098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:57:08.973103 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.973108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-02 17:57:08.973118 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.973129 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:57:08.973135 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.973143 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:57:08.973147 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:08.973152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-02 17:57:08.973157 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.973161 | orchestrator | 2025-06-02 17:57:08.973166 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-02 17:57:08.973170 | orchestrator | Monday 02 June 2025 17:55:06 +0000 (0:00:02.931) 0:02:40.486 *********** 2025-06-02 17:57:08.973175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:57:08.973184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:57:08.973199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-02 17:57:08.973234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:57:08.973242 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:57:08.973247 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-02 17:57:08.973256 | orchestrator | 2025-06-02 17:57:08.973261 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2025-06-02 17:57:08.973265 | orchestrator | Monday 02 June 2025 17:55:11 +0000 (0:00:04.967) 0:02:45.454 *********** 2025-06-02 17:57:08.973270 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:57:08.973274 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:57:08.973278 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:57:08.973283 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:57:08.973287 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:57:08.973291 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:57:08.973296 | orchestrator | 2025-06-02 17:57:08.973300 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-02 17:57:08.973305 | orchestrator | Monday 02 June 2025 17:55:11 +0000 (0:00:00.809) 0:02:46.263 *********** 2025-06-02 17:57:08.973309 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:08.973314 | orchestrator | 2025-06-02 17:57:08.973318 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-06-02 17:57:08.973322 | orchestrator | Monday 02 June 2025 17:55:14 +0000 (0:00:02.544) 0:02:48.808 *********** 2025-06-02 17:57:08.973327 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:08.973331 | orchestrator | 2025-06-02 17:57:08.973335 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-06-02 17:57:08.973340 | orchestrator | Monday 02 June 2025 17:55:17 +0000 (0:00:02.671) 0:02:51.480 *********** 2025-06-02 17:57:08.973344 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:08.973349 | orchestrator | 2025-06-02 17:57:08.973357 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 17:57:08.973364 | orchestrator | Monday 02 June 2025 17:56:03 +0000 (0:00:46.153) 0:03:37.633 *********** 2025-06-02 17:57:08.973376 | orchestrator | 
2025-06-02 17:57:08.973384 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 17:57:08.973391 | orchestrator | Monday 02 June 2025 17:56:03 +0000 (0:00:00.096) 0:03:37.730 *********** 2025-06-02 17:57:08.973398 | orchestrator | 2025-06-02 17:57:08.973406 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 17:57:08.973418 | orchestrator | Monday 02 June 2025 17:56:03 +0000 (0:00:00.342) 0:03:38.073 *********** 2025-06-02 17:57:08.973426 | orchestrator | 2025-06-02 17:57:08.973433 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 17:57:08.973443 | orchestrator | Monday 02 June 2025 17:56:03 +0000 (0:00:00.072) 0:03:38.145 *********** 2025-06-02 17:57:08.973450 | orchestrator | 2025-06-02 17:57:08.973457 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 17:57:08.973463 | orchestrator | Monday 02 June 2025 17:56:03 +0000 (0:00:00.069) 0:03:38.214 *********** 2025-06-02 17:57:08.973471 | orchestrator | 2025-06-02 17:57:08.973478 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-02 17:57:08.973485 | orchestrator | Monday 02 June 2025 17:56:03 +0000 (0:00:00.065) 0:03:38.280 *********** 2025-06-02 17:57:08.973493 | orchestrator | 2025-06-02 17:57:08.973500 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-06-02 17:57:08.973507 | orchestrator | Monday 02 June 2025 17:56:03 +0000 (0:00:00.068) 0:03:38.349 *********** 2025-06-02 17:57:08.973514 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:57:08.973524 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:57:08.973528 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:57:08.973533 | orchestrator | 2025-06-02 17:57:08.973537 | orchestrator | RUNNING HANDLER [neutron : 
Restart neutron-ovn-metadata-agent container] ******* 2025-06-02 17:57:08.973547 | orchestrator | Monday 02 June 2025 17:56:39 +0000 (0:00:35.541) 0:04:13.891 *********** 2025-06-02 17:57:08.973551 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:57:08.973556 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:57:08.973560 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:57:08.973565 | orchestrator | 2025-06-02 17:57:08.973569 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:57:08.973574 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-02 17:57:08.973580 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-02 17:57:08.973585 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-02 17:57:08.973590 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 17:57:08.973594 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 17:57:08.973598 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-02 17:57:08.973602 | orchestrator | 2025-06-02 17:57:08.973607 | orchestrator | 2025-06-02 17:57:08.973611 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:57:08.973616 | orchestrator | Monday 02 June 2025 17:57:05 +0000 (0:00:26.533) 0:04:40.424 *********** 2025-06-02 17:57:08.973621 | orchestrator | =============================================================================== 2025-06-02 17:57:08.973625 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 46.15s 2025-06-02 17:57:08.973629 | orchestrator | neutron : Restart 
neutron-server container ----------------------------- 35.54s 2025-06-02 17:57:08.973634 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 26.53s 2025-06-02 17:57:08.973638 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.04s 2025-06-02 17:57:08.973642 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.82s 2025-06-02 17:57:08.973647 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.49s 2025-06-02 17:57:08.973651 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 6.00s 2025-06-02 17:57:08.973656 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.97s 2025-06-02 17:57:08.973660 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.86s 2025-06-02 17:57:08.973664 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 4.60s 2025-06-02 17:57:08.973668 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.40s 2025-06-02 17:57:08.973673 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 4.17s 2025-06-02 17:57:08.973677 | orchestrator | neutron : Copying over bgp_dragent.ini ---------------------------------- 4.12s 2025-06-02 17:57:08.973682 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.08s 2025-06-02 17:57:08.973686 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.06s 2025-06-02 17:57:08.973691 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 3.88s 2025-06-02 17:57:08.973695 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 3.80s 2025-06-02 17:57:08.973699 | orchestrator | neutron : Copying over 
ironic_neutron_agent.ini ------------------------- 3.77s 2025-06-02 17:57:08.973704 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.74s 2025-06-02 17:57:08.973708 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.66s 2025-06-02 17:57:08.973721 | orchestrator | 2025-06-02 17:57:08 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 2025-06-02 17:57:08.973726 | orchestrator | 2025-06-02 17:57:08 | INFO  | Task 1e907186-85db-4c86-bbea-c28ecc74e366 is in state STARTED 2025-06-02 17:57:08.973734 | orchestrator | 2025-06-02 17:57:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:12.030155 | orchestrator | 2025-06-02 17:57:12 | INFO  | Task f4ce4db2-2f33-4129-b71c-e8dc672a251b is in state STARTED 2025-06-02 17:57:12.030961 | orchestrator | 2025-06-02 17:57:12 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:57:12.031865 | orchestrator | 2025-06-02 17:57:12 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 2025-06-02 17:57:12.033109 | orchestrator | 2025-06-02 17:57:12 | INFO  | Task 1e907186-85db-4c86-bbea-c28ecc74e366 is in state STARTED 2025-06-02 17:57:12.033166 | orchestrator | 2025-06-02 17:57:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:57:15.072342 | orchestrator | 2025-06-02 17:57:15 | INFO  | Task f4ce4db2-2f33-4129-b71c-e8dc672a251b is in state STARTED 2025-06-02 17:57:15.075835 | orchestrator | 2025-06-02 17:57:15 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:57:15.078822 | orchestrator | 2025-06-02 17:57:15 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 2025-06-02 17:57:15.079122 | orchestrator | 2025-06-02 17:57:15 | INFO  | Task 1e907186-85db-4c86-bbea-c28ecc74e366 is in state SUCCESS 2025-06-02 17:57:15.079319 | orchestrator | 2025-06-02 17:57:15 | INFO  | Wait 1 second(s) until the next 
check 2025-06-02 17:57:18.126830 | orchestrator | 2025-06-02 17:57:18 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 17:57:18.127383 | orchestrator | 2025-06-02 17:57:18 | INFO  | Task f4ce4db2-2f33-4129-b71c-e8dc672a251b is in state STARTED 2025-06-02 17:57:18.128811 | orchestrator | 2025-06-02 17:57:18 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:57:18.130786 | orchestrator | 2025-06-02 17:57:18 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 2025-06-02 17:57:18.130844 | orchestrator | 2025-06-02 17:57:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:58:49.622682 | orchestrator | 2025-06-02 17:58:49 | INFO  | Task
f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 17:58:49.622897 | orchestrator | 2025-06-02 17:58:49 | INFO  | Task f4ce4db2-2f33-4129-b71c-e8dc672a251b is in state STARTED 2025-06-02 17:58:49.625229 | orchestrator | 2025-06-02 17:58:49 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:58:49.625864 | orchestrator | 2025-06-02 17:58:49 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 2025-06-02 17:58:49.625898 | orchestrator | 2025-06-02 17:58:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:58:52.648578 | orchestrator | 2025-06-02 17:58:52 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 17:58:52.650669 | orchestrator | 2025-06-02 17:58:52.650735 | orchestrator | 2025-06-02 17:58:52.650745 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:58:52.650774 | orchestrator | 2025-06-02 17:58:52.650781 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:58:52.650788 | orchestrator | Monday 02 June 2025 17:57:12 +0000 (0:00:00.182) 0:00:00.182 *********** 2025-06-02 17:58:52.650795 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:58:52.650867 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:58:52.650876 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:58:52.650883 | orchestrator | 2025-06-02 17:58:52.650890 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:58:52.650896 | orchestrator | Monday 02 June 2025 17:57:12 +0000 (0:00:00.325) 0:00:00.507 *********** 2025-06-02 17:58:52.650904 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-06-02 17:58:52.650915 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-06-02 17:58:52.650926 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-06-02 17:58:52.650935 | 
orchestrator | 2025-06-02 17:58:52.650946 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-06-02 17:58:52.650957 | orchestrator | 2025-06-02 17:58:52.650967 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-06-02 17:58:52.650978 | orchestrator | Monday 02 June 2025 17:57:13 +0000 (0:00:00.624) 0:00:01.132 *********** 2025-06-02 17:58:52.650988 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:58:52.650999 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:58:52.651011 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:58:52.651018 | orchestrator | 2025-06-02 17:58:52.651025 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:58:52.651032 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:58:52.651053 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:58:52.651060 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 17:58:52.651067 | orchestrator | 2025-06-02 17:58:52.651073 | orchestrator | 2025-06-02 17:58:52.651080 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:58:52.651087 | orchestrator | Monday 02 June 2025 17:57:13 +0000 (0:00:00.772) 0:00:01.904 *********** 2025-06-02 17:58:52.651093 | orchestrator | =============================================================================== 2025-06-02 17:58:52.651100 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.77s 2025-06-02 17:58:52.651106 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.62s 2025-06-02 17:58:52.651113 | orchestrator | Group hosts based on Kolla action --------------------------------------- 
0.33s 2025-06-02 17:58:52.651120 | orchestrator | 2025-06-02 17:58:52.651126 | orchestrator | 2025-06-02 17:58:52.651133 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:58:52.651139 | orchestrator | 2025-06-02 17:58:52.651146 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:58:52.651152 | orchestrator | Monday 02 June 2025 17:56:49 +0000 (0:00:00.245) 0:00:00.245 *********** 2025-06-02 17:58:52.651159 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:58:52.651165 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:58:52.651172 | orchestrator | ok: [testbed-node-2] 2025-06-02 17:58:52.651178 | orchestrator | 2025-06-02 17:58:52.651185 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:58:52.651197 | orchestrator | Monday 02 June 2025 17:56:49 +0000 (0:00:00.224) 0:00:00.470 *********** 2025-06-02 17:58:52.651230 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-06-02 17:58:52.651238 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-06-02 17:58:52.651246 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-06-02 17:58:52.651254 | orchestrator | 2025-06-02 17:58:52.651262 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-06-02 17:58:52.651278 | orchestrator | 2025-06-02 17:58:52.651285 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 17:58:52.651293 | orchestrator | Monday 02 June 2025 17:56:50 +0000 (0:00:00.416) 0:00:00.886 *********** 2025-06-02 17:58:52.651301 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:58:52.651308 | orchestrator | 2025-06-02 17:58:52.651316 | orchestrator | TASK [service-ks-register : magnum | 
Creating services] ************************ 2025-06-02 17:58:52.651324 | orchestrator | Monday 02 June 2025 17:56:50 +0000 (0:00:00.459) 0:00:01.346 *********** 2025-06-02 17:58:52.651332 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-06-02 17:58:52.651340 | orchestrator | 2025-06-02 17:58:52.651348 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-06-02 17:58:52.651355 | orchestrator | Monday 02 June 2025 17:56:54 +0000 (0:00:03.667) 0:00:05.014 *********** 2025-06-02 17:58:52.651363 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-06-02 17:58:52.651371 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-06-02 17:58:52.651379 | orchestrator | 2025-06-02 17:58:52.651386 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-06-02 17:58:52.651393 | orchestrator | Monday 02 June 2025 17:57:00 +0000 (0:00:06.378) 0:00:11.393 *********** 2025-06-02 17:58:52.651401 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 17:58:52.651409 | orchestrator | 2025-06-02 17:58:52.651416 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-06-02 17:58:52.651424 | orchestrator | Monday 02 June 2025 17:57:04 +0000 (0:00:03.485) 0:00:14.878 *********** 2025-06-02 17:58:52.651447 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 17:58:52.651455 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-06-02 17:58:52.651463 | orchestrator | 2025-06-02 17:58:52.651489 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-06-02 17:58:52.651498 | orchestrator | Monday 02 June 2025 17:57:08 +0000 (0:00:04.258) 0:00:19.137 *********** 2025-06-02 
17:58:52.651506 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 17:58:52.651514 | orchestrator | 2025-06-02 17:58:52.651522 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-06-02 17:58:52.651530 | orchestrator | Monday 02 June 2025 17:57:12 +0000 (0:00:03.496) 0:00:22.633 *********** 2025-06-02 17:58:52.651537 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-06-02 17:58:52.651545 | orchestrator | 2025-06-02 17:58:52.651553 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-06-02 17:58:52.651560 | orchestrator | Monday 02 June 2025 17:57:15 +0000 (0:00:03.525) 0:00:26.158 *********** 2025-06-02 17:58:52.651568 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:58:52.651576 | orchestrator | 2025-06-02 17:58:52.651584 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-06-02 17:58:52.651592 | orchestrator | Monday 02 June 2025 17:57:18 +0000 (0:00:03.339) 0:00:29.498 *********** 2025-06-02 17:58:52.651599 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:58:52.651606 | orchestrator | 2025-06-02 17:58:52.651613 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-06-02 17:58:52.651619 | orchestrator | Monday 02 June 2025 17:57:23 +0000 (0:00:04.108) 0:00:33.606 *********** 2025-06-02 17:58:52.651626 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:58:52.651632 | orchestrator | 2025-06-02 17:58:52.651639 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-06-02 17:58:52.651650 | orchestrator | Monday 02 June 2025 17:57:26 +0000 (0:00:03.797) 0:00:37.403 *********** 2025-06-02 17:58:52.651659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.651675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.651683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.651696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.651709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.651721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.651728 | orchestrator | 2025-06-02 17:58:52.651735 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-02 17:58:52.651742 | orchestrator | Monday 02 June 2025 17:57:28 +0000 (0:00:01.395) 0:00:38.799 *********** 2025-06-02 17:58:52.651749 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:58:52.651755 | orchestrator | 2025-06-02 17:58:52.651762 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-02 17:58:52.651769 | orchestrator | Monday 02 June 2025 17:57:28 +0000 (0:00:00.133) 0:00:38.933 *********** 2025-06-02 17:58:52.651775 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:58:52.651782 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:58:52.651788 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:58:52.651795 | orchestrator | 2025-06-02 17:58:52.651802 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-02 17:58:52.651808 | orchestrator | Monday 02 
June 2025 17:57:28 +0000 (0:00:00.532) 0:00:39.465 *********** 2025-06-02 17:58:52.651815 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:58:52.651821 | orchestrator | 2025-06-02 17:58:52.651828 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-02 17:58:52.651835 | orchestrator | Monday 02 June 2025 17:57:29 +0000 (0:00:00.993) 0:00:40.459 *********** 2025-06-02 17:58:52.651842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.651856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 
'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.651872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.651880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 
2025-06-02 17:58:52.651887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.651894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.651901 | orchestrator | 2025-06-02 17:58:52.651907 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-02 17:58:52.651914 | orchestrator | Monday 02 June 2025 17:57:32 +0000 (0:00:02.528) 0:00:42.987 *********** 2025-06-02 17:58:52.651921 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:58:52.651927 | orchestrator | ok: [testbed-node-1] 2025-06-02 17:58:52.651934 | orchestrator | ok: 
[testbed-node-2] 2025-06-02 17:58:52.651941 | orchestrator | 2025-06-02 17:58:52.651947 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 17:58:52.651958 | orchestrator | Monday 02 June 2025 17:57:32 +0000 (0:00:00.286) 0:00:43.274 *********** 2025-06-02 17:58:52.651966 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:58:52.651973 | orchestrator | 2025-06-02 17:58:52.651979 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-02 17:58:52.651990 | orchestrator | Monday 02 June 2025 17:57:33 +0000 (0:00:00.703) 0:00:43.977 *********** 2025-06-02 17:58:52.651997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.652008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.652015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.652023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.652036 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.652048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.652055 | orchestrator | 2025-06-02 17:58:52.652062 | orchestrator | TASK 
[service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-02 17:58:52.652072 | orchestrator | Monday 02 June 2025 17:57:35 +0000 (0:00:02.357) 0:00:46.335 *********** 2025-06-02 17:58:52.652080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:58:52.652087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:58:52.652094 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:58:52.652101 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:58:52.652126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:58:52.652133 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:58:52.652144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:58:52.652151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:58:52.652158 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:58:52.652164 | orchestrator | 2025-06-02 17:58:52.652171 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-02 17:58:52.652178 | orchestrator | Monday 02 June 2025 17:57:36 +0000 (0:00:00.606) 0:00:46.942 *********** 2025-06-02 17:58:52.652185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:58:52.652192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:58:52.652233 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:58:52.652248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:58:52.652259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:58:52.652267 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:58:52.652274 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:58:52.652281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:58:52.652287 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:58:52.652294 | orchestrator | 2025-06-02 17:58:52.652301 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-02 17:58:52.652312 | orchestrator | Monday 02 June 2025 17:57:37 +0000 (0:00:01.238) 0:00:48.181 *********** 2025-06-02 17:58:52.652324 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.652522 | orchestrator | 2025-06-02 17:58:52 | INFO  | Task f4ce4db2-2f33-4129-b71c-e8dc672a251b is in state SUCCESS 2025-06-02 17:58:52.652540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.652548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.652555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.652563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.652581 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.652588 | orchestrator | 2025-06-02 17:58:52.652595 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-02 17:58:52.652602 | orchestrator | Monday 02 June 2025 17:57:39 +0000 (0:00:02.342) 0:00:50.523 *********** 2025-06-02 17:58:52.652612 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.652620 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.652627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.652641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.652654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.652665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.652672 | orchestrator | 2025-06-02 17:58:52.652679 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-02 17:58:52.652686 | orchestrator | Monday 02 June 2025 17:57:46 +0000 (0:00:06.144) 0:00:56.668 *********** 2025-06-02 17:58:52.652693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:58:52.652700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:58:52.652711 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:58:52.652718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:58:52.652730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:58:52.652737 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:58:52.652747 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-02 17:58:52.652755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:58:52.652762 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:58:52.652769 | orchestrator | 2025-06-02 17:58:52.652776 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-06-02 17:58:52.652787 | orchestrator | Monday 02 June 2025 17:57:47 +0000 (0:00:01.386) 
0:00:58.054 *********** 2025-06-02 17:58:52.652794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.652805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.652812 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-02 17:58:52.652822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.652830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.652841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:58:52.652848 | orchestrator | 2025-06-02 17:58:52.652855 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-02 17:58:52.652861 | orchestrator | Monday 02 June 2025 17:57:50 +0000 (0:00:02.926) 0:01:00.981 *********** 2025-06-02 17:58:52.652868 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:58:52.652875 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:58:52.652882 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:58:52.652888 | orchestrator | 2025-06-02 17:58:52.652895 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-06-02 17:58:52.652902 | orchestrator | Monday 02 June 2025 17:57:50 +0000 (0:00:00.364) 0:01:01.346 *********** 2025-06-02 17:58:52.652909 | orchestrator | changed: [testbed-node-0] 
2025-06-02 17:58:52.652915 | orchestrator | 2025-06-02 17:58:52.652922 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-06-02 17:58:52.652929 | orchestrator | Monday 02 June 2025 17:57:53 +0000 (0:00:02.286) 0:01:03.633 *********** 2025-06-02 17:58:52.652936 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:58:52.652942 | orchestrator | 2025-06-02 17:58:52.652949 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-06-02 17:58:52.652956 | orchestrator | Monday 02 June 2025 17:57:55 +0000 (0:00:02.306) 0:01:05.939 *********** 2025-06-02 17:58:52.652966 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:58:52.652973 | orchestrator | 2025-06-02 17:58:52.652979 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 17:58:52.652986 | orchestrator | Monday 02 June 2025 17:58:13 +0000 (0:00:18.093) 0:01:24.033 *********** 2025-06-02 17:58:52.652992 | orchestrator | 2025-06-02 17:58:52.653000 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 17:58:52.653006 | orchestrator | Monday 02 June 2025 17:58:13 +0000 (0:00:00.068) 0:01:24.101 *********** 2025-06-02 17:58:52.653013 | orchestrator | 2025-06-02 17:58:52.653020 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-02 17:58:52.653026 | orchestrator | Monday 02 June 2025 17:58:13 +0000 (0:00:00.073) 0:01:24.175 *********** 2025-06-02 17:58:52.653033 | orchestrator | 2025-06-02 17:58:52.653040 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-06-02 17:58:52.653046 | orchestrator | Monday 02 June 2025 17:58:13 +0000 (0:00:00.066) 0:01:24.242 *********** 2025-06-02 17:58:52.653053 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:58:52.653060 | orchestrator | changed: [testbed-node-1] 
2025-06-02 17:58:52.653066 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:58:52.653073 | orchestrator | 2025-06-02 17:58:52.653080 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-06-02 17:58:52.653087 | orchestrator | Monday 02 June 2025 17:58:37 +0000 (0:00:23.371) 0:01:47.613 *********** 2025-06-02 17:58:52.653093 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:58:52.653100 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:58:52.653118 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:58:52.653125 | orchestrator | 2025-06-02 17:58:52.653132 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:58:52.653143 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-02 17:58:52.653151 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:58:52.653159 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 17:58:52.653167 | orchestrator | 2025-06-02 17:58:52.653175 | orchestrator | 2025-06-02 17:58:52.653182 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:58:52.653190 | orchestrator | Monday 02 June 2025 17:58:52 +0000 (0:00:15.226) 0:02:02.839 *********** 2025-06-02 17:58:52.653198 | orchestrator | =============================================================================== 2025-06-02 17:58:52.653228 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 23.37s 2025-06-02 17:58:52.653236 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 18.09s 2025-06-02 17:58:52.653244 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.23s 2025-06-02 17:58:52.653251 | 
orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.38s 2025-06-02 17:58:52.653259 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 6.15s 2025-06-02 17:58:52.653267 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.26s 2025-06-02 17:58:52.653274 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.11s 2025-06-02 17:58:52.653282 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.80s 2025-06-02 17:58:52.653290 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.67s 2025-06-02 17:58:52.653298 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.53s 2025-06-02 17:58:52.653305 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.50s 2025-06-02 17:58:52.653313 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.49s 2025-06-02 17:58:52.653321 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.34s 2025-06-02 17:58:52.653329 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.93s 2025-06-02 17:58:52.653337 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.53s 2025-06-02 17:58:52.653344 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.36s 2025-06-02 17:58:52.653352 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.34s 2025-06-02 17:58:52.653359 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.31s 2025-06-02 17:58:52.653367 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.29s 2025-06-02 17:58:52.653375 | orchestrator | 
magnum : Ensuring config directories exist ------------------------------ 1.40s 2025-06-02 17:58:52.653383 | orchestrator | 2025-06-02 17:58:52 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:58:52.653391 | orchestrator | 2025-06-02 17:58:52 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 2025-06-02 17:58:52.653398 | orchestrator | 2025-06-02 17:58:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:58:55.698002 | orchestrator | 2025-06-02 17:58:55 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 17:58:55.700183 | orchestrator | 2025-06-02 17:58:55 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:58:55.702669 | orchestrator | 2025-06-02 17:58:55 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 2025-06-02 17:58:55.702769 | orchestrator | 2025-06-02 17:58:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:58:58.747050 | orchestrator | 2025-06-02 17:58:58 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 17:58:58.749076 | orchestrator | 2025-06-02 17:58:58 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:58:58.752030 | orchestrator | 2025-06-02 17:58:58 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 2025-06-02 17:58:58.752092 | orchestrator | 2025-06-02 17:58:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:01.795122 | orchestrator | 2025-06-02 17:59:01 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 17:59:01.797825 | orchestrator | 2025-06-02 17:59:01 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:59:01.800161 | orchestrator | 2025-06-02 17:59:01 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 2025-06-02 17:59:01.800252 | orchestrator | 2025-06-02 17:59:01 | INFO  | Wait 1 
second(s) until the next check 2025-06-02 17:59:04.841024 | orchestrator | 2025-06-02 17:59:04 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 17:59:04.842995 | orchestrator | 2025-06-02 17:59:04 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:59:04.844727 | orchestrator | 2025-06-02 17:59:04 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 2025-06-02 17:59:04.844767 | orchestrator | 2025-06-02 17:59:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:07.888959 | orchestrator | 2025-06-02 17:59:07 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 17:59:07.889032 | orchestrator | 2025-06-02 17:59:07 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:59:07.889038 | orchestrator | 2025-06-02 17:59:07 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 2025-06-02 17:59:07.889043 | orchestrator | 2025-06-02 17:59:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:10.937512 | orchestrator | 2025-06-02 17:59:10 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 17:59:10.939165 | orchestrator | 2025-06-02 17:59:10 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:59:10.940386 | orchestrator | 2025-06-02 17:59:10 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 2025-06-02 17:59:10.940415 | orchestrator | 2025-06-02 17:59:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:13.993078 | orchestrator | 2025-06-02 17:59:13 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 17:59:13.995997 | orchestrator | 2025-06-02 17:59:13 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:59:13.998608 | orchestrator | 2025-06-02 17:59:13 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED 
2025-06-02 17:59:13.998655 | orchestrator | 2025-06-02 17:59:13 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:17.042318 | orchestrator | 2025-06-02 17:59:17 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED
2025-06-02 17:59:17.044364 | orchestrator | 2025-06-02 17:59:17 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED
2025-06-02 17:59:17.048130 | orchestrator | 2025-06-02 17:59:17 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED
2025-06-02 17:59:17.048249 | orchestrator | 2025-06-02 17:59:17 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:20.099081 | orchestrator | 2025-06-02 17:59:20 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED
2025-06-02 17:59:20.103365 | orchestrator | 2025-06-02 17:59:20 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED
2025-06-02 17:59:20.108999 | orchestrator | 2025-06-02 17:59:20 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state STARTED
2025-06-02 17:59:20.109085 | orchestrator | 2025-06-02 17:59:20 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:23.166004 | orchestrator | 2025-06-02 17:59:23 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED
2025-06-02 17:59:23.166140 | orchestrator | 2025-06-02 17:59:23 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED
2025-06-02 17:59:23.173452 | orchestrator |
2025-06-02 17:59:23.173520 | orchestrator |
2025-06-02 17:59:23.173527 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-02 17:59:23.173533 | orchestrator |
2025-06-02 17:59:23.173538 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-02 17:59:23.173544 | orchestrator | Monday 02 June 2025 17:57:02 +0000 (0:00:00.252) 0:00:00.252 ***********
2025-06-02 17:59:23.173548 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:59:23.173554 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:59:23.173559 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:59:23.173564 | orchestrator |
2025-06-02 17:59:23.173568 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-02 17:59:23.173574 | orchestrator | Monday 02 June 2025 17:57:02 +0000 (0:00:00.328) 0:00:00.581 ***********
2025-06-02 17:59:23.173578 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-06-02 17:59:23.173583 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-06-02 17:59:23.173588 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-06-02 17:59:23.173592 | orchestrator |
2025-06-02 17:59:23.173597 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-06-02 17:59:23.173601 | orchestrator |
2025-06-02 17:59:23.173606 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-02 17:59:23.173611 | orchestrator | Monday 02 June 2025 17:57:03 +0000 (0:00:00.424) 0:00:01.005 ***********
2025-06-02 17:59:23.173615 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:59:23.173621 | orchestrator |
2025-06-02 17:59:23.173625 | orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-06-02 17:59:23.173640 | orchestrator | Monday 02 June 2025 17:57:03 +0000 (0:00:00.528) 0:00:01.533 ***********
2025-06-02 17:59:23.173647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.173727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.173750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.173755 | orchestrator |
2025-06-02 17:59:23.173760 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-06-02 17:59:23.173764 | orchestrator | Monday 02 June 2025 17:57:04 +0000 (0:00:00.905) 0:00:02.438 ***********
2025-06-02 17:59:23.173768 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-06-02 17:59:23.173774 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-06-02 17:59:23.173779 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 17:59:23.173783 | orchestrator |
2025-06-02 17:59:23.173788 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-06-02 17:59:23.173792 | orchestrator | Monday 02 June 2025 17:57:06 +0000 (0:00:01.363) 0:00:03.802 ***********
2025-06-02 17:59:23.173796 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-02 17:59:23.173801 | orchestrator |
2025-06-02 17:59:23.173805 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-06-02 17:59:23.173810 | orchestrator | Monday 02 June 2025 17:57:06 +0000 (0:00:00.725) 0:00:04.527 ***********
2025-06-02 17:59:23.173827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.173836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.173841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.173850 | orchestrator |
2025-06-02 17:59:23.173854 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] ***
2025-06-02 17:59:23.173858 | orchestrator | Monday 02 June 2025 17:57:08 +0000 (0:00:01.685) 0:00:06.212 ***********
2025-06-02 17:59:23.173863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.173868 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:23.173872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.173877 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:59:23.173884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.173889 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:59:23.173923 | orchestrator |
2025-06-02 17:59:23.173928 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-06-02 17:59:23.173932 | orchestrator | Monday 02 June 2025 17:57:08 +0000 (0:00:00.452) 0:00:06.665 ***********
2025-06-02 17:59:23.173937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.173942 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:23.173950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.173958 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:59:23.173963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.173967 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:59:23.173972 | orchestrator |
2025-06-02 17:59:23.173976 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-06-02 17:59:23.173980 | orchestrator | Monday 02 June 2025 17:57:09 +0000 (0:00:01.021) 0:00:07.687 ***********
2025-06-02 17:59:23.173985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.173990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.174000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.174005 | orchestrator |
2025-06-02 17:59:23.174009 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-06-02 17:59:23.174069 | orchestrator | Monday 02 June 2025 17:57:11 +0000 (0:00:01.391) 0:00:09.079 ***********
2025-06-02 17:59:23.174109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.174144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.174149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-06-02 17:59:23.174154 | orchestrator |
2025-06-02 17:59:23.174158 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-06-02 17:59:23.174163 | orchestrator | Monday 02 June 2025 17:57:12 +0000 (0:00:01.315) 0:00:10.395 ***********
2025-06-02 17:59:23.174168 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:23.174343 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:59:23.174348 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:59:23.174353 | orchestrator |
2025-06-02 17:59:23.174357 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-06-02 17:59:23.174361 | orchestrator | Monday 02 June 2025 17:57:13 +0000 (0:00:00.539) 0:00:10.935 ***********
2025-06-02 17:59:23.174366 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 17:59:23.174371 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 17:59:23.174375 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-06-02 17:59:23.174379 | orchestrator |
2025-06-02 17:59:23.174384 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-06-02 17:59:23.174388 | orchestrator | Monday 02 June 2025 17:57:14 +0000 (0:00:01.146) 0:00:12.082 ***********
2025-06-02 17:59:23.174392 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 17:59:23.174397 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 17:59:23.174402 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-06-02 17:59:23.174406 | orchestrator |
2025-06-02 17:59:23.174414 | orchestrator | TASK [grafana : Find custom grafana dashboards] ********************************
2025-06-02 17:59:23.174420 | orchestrator | Monday 02 June 2025 17:57:15 +0000 (0:00:01.205) 0:00:13.287 ***********
2025-06-02 17:59:23.174433 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-02 17:59:23.174444 | orchestrator |
2025-06-02 17:59:23.174452 | orchestrator | TASK [grafana : Find templated grafana dashboards] *****************************
2025-06-02 17:59:23.174460 | orchestrator | Monday 02 June 2025 17:57:16 +0000 (0:00:00.735) 0:00:14.022 ***********
2025-06-02 17:59:23.174473 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access
2025-06-02 17:59:23.174480 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory
2025-06-02 17:59:23.174487 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:59:23.174494 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:59:23.174501 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:59:23.174507 | orchestrator |
2025-06-02 17:59:23.174514 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] ****************************
2025-06-02 17:59:23.174520 | orchestrator | Monday 02 June 2025 17:57:16 +0000 (0:00:00.703) 0:00:14.726 ***********
2025-06-02 17:59:23.174526 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:23.174532 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:59:23.174539 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:59:23.174545 | orchestrator |
2025-06-02 17:59:23.174569 | orchestrator | TASK [grafana : Copying over custom dashboards] ********************************
2025-06-02 17:59:23.174576 | orchestrator | Monday 02 June 2025 17:57:17 +0000 (0:00:00.533) 0:00:15.259 ***********
2025-06-02 17:59:23.174590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1107888, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.924738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1107888, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.924738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1107888, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.924738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1107883, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.921738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1107883, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.921738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1107883, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.921738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1107880, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.919738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1107880, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.919738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1107880, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.919738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1107886, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.922738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1107886, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.922738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1107886, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.922738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1107872, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.914738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1107872, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.914738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174731 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1107872, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.914738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1107881, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.920738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1107881, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.920738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1107881, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.920738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1107885, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.922738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1107885, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.922738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1107885, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.922738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1107870, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.913738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1107870, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.913738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1107870, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.913738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1107863, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9067378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-06-02 17:59:23.174842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path':
'/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1107863, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9067378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.174852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1107863, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9067378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.174860 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1107875, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9157379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.174867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1107875, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9157379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.174881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1107875, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9157379, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.174895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1107865, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9097378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.174904 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1107865, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9097378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.174913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1107865, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9097378, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.174918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1107884, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.921738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.174922 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1107884, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.921738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1107884, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.921738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1107878, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.918738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-06-02 17:59:23.175303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1107878, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.918738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1107878, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.918738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1107887, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.922738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1107887, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.922738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1107887, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.922738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1107868, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.913738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1107868, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.913738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1107868, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.913738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1107882, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.920738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1107882, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.920738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1107882, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.920738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1107864, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.908738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1107864, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.908738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1107864, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.908738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1107866, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.911738, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175404 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1107866, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.911738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1107866, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.911738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1107879, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.918738, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1107879, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.918738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1107879, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.918738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1107910, 'dev': 113, 'nlink': 1, 
'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9527383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1107910, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9527383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1107910, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9527383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175456 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1107901, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9407382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1107901, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9407382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175469 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1107901, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9407382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1107891, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.926738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175484 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1107891, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.926738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1107891, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.926738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1107922, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9597383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1107922, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9597383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1107922, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9597383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2025-06-02 17:59:23.175515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1107894, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.927738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1107894, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.927738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1107894, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.927738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1107919, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9587383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1107919, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9587383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1107919, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 
1748884080.9587383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175553 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1107923, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9627383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1107923, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9627383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1107923, 'dev': 113, 'nlink': 1, 
'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9627383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1107914, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9537382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1107914, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9537382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1107914, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9537382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1107917, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9577382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1107917, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9577382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175606 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1107917, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9577382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1107896, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.928738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1107896, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.928738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175623 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1107896, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.928738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1107902, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9407382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1107902, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9407382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 
17:59:23.175645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1107902, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9407382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1107927, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9647384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1107927, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9647384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1107927, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9647384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1107921, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9587383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1107921, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9587383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': 
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1107921, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9587383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1107898, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9317381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1107898, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9317381, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1107898, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9317381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1107897, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.929738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1107897, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 
1748870561.0, 'ctime': 1748884080.929738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1107897, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.929738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1107899, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.932738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1107899, 'dev': 
113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.932738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1107900, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.939738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1107899, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.932738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 410814, 'inode': 1107900, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.939738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1107903, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.942738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1107900, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.939738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1107903, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.942738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1107916, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9547381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175785 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1107903, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.942738, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1107916, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9547381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1107908, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9437382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1107916, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9547381, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175810 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1107908, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9437382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1107929, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9667382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1107908, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9437382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-06-02 17:59:23.175827 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1107929, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9667382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1107929, 'dev': 113, 'nlink': 1, 'atime': 1748870561.0, 'mtime': 1748870561.0, 'ctime': 1748884080.9667382, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-02 17:59:23.175843 | orchestrator | 2025-06-02 17:59:23.175848 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-06-02 17:59:23.175853 | orchestrator | Monday 02 June 2025 17:57:55 +0000 (0:00:38.284) 0:00:53.544 *********** 2025-06-02 17:59:23.175858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 17:59:23.175862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 17:59:23.175867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-02 17:59:23.175872 | orchestrator | 2025-06-02 17:59:23.175876 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-06-02 17:59:23.175881 | orchestrator | Monday 02 
June 2025 17:57:56 +0000 (0:00:01.015) 0:00:54.559 *********** 2025-06-02 17:59:23.175886 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:23.175891 | orchestrator | 2025-06-02 17:59:23.175896 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-06-02 17:59:23.175901 | orchestrator | Monday 02 June 2025 17:57:59 +0000 (0:00:02.280) 0:00:56.839 *********** 2025-06-02 17:59:23.175907 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:23.175912 | orchestrator | 2025-06-02 17:59:23.175917 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-02 17:59:23.175922 | orchestrator | Monday 02 June 2025 17:58:01 +0000 (0:00:02.338) 0:00:59.178 *********** 2025-06-02 17:59:23.175927 | orchestrator | 2025-06-02 17:59:23.175932 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-02 17:59:23.175943 | orchestrator | Monday 02 June 2025 17:58:01 +0000 (0:00:00.249) 0:00:59.427 *********** 2025-06-02 17:59:23.175949 | orchestrator | 2025-06-02 17:59:23.175954 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-06-02 17:59:23.175959 | orchestrator | Monday 02 June 2025 17:58:01 +0000 (0:00:00.063) 0:00:59.491 *********** 2025-06-02 17:59:23.175965 | orchestrator | 2025-06-02 17:59:23.175973 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-06-02 17:59:23.175980 | orchestrator | Monday 02 June 2025 17:58:01 +0000 (0:00:00.067) 0:00:59.558 *********** 2025-06-02 17:59:23.175990 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:23.176000 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:23.176008 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:23.176015 | orchestrator | 2025-06-02 17:59:23.176022 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first 
node] ********* 2025-06-02 17:59:23.176030 | orchestrator | Monday 02 June 2025 17:58:03 +0000 (0:00:02.042) 0:01:01.601 *********** 2025-06-02 17:59:23.176038 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:23.176047 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:23.176055 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-06-02 17:59:23.176063 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-06-02 17:59:23.176072 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2025-06-02 17:59:23.176080 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:59:23.176088 | orchestrator | 2025-06-02 17:59:23.176096 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-06-02 17:59:23.176109 | orchestrator | Monday 02 June 2025 17:58:42 +0000 (0:00:38.768) 0:01:40.370 *********** 2025-06-02 17:59:23.176117 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:23.176125 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:59:23.176133 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:59:23.176141 | orchestrator | 2025-06-02 17:59:23.176150 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-06-02 17:59:23.176159 | orchestrator | Monday 02 June 2025 17:59:15 +0000 (0:00:32.910) 0:02:13.280 *********** 2025-06-02 17:59:23.176166 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:59:23.176171 | orchestrator | 2025-06-02 17:59:23.176177 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-06-02 17:59:23.176201 | orchestrator | Monday 02 June 2025 17:59:17 +0000 (0:00:02.467) 0:02:15.748 *********** 2025-06-02 17:59:23.176207 | orchestrator | skipping: [testbed-node-0] 2025-06-02 
17:59:23.176212 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:23.176217 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:23.176223 | orchestrator | 2025-06-02 17:59:23.176228 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-06-02 17:59:23.176233 | orchestrator | Monday 02 June 2025 17:59:18 +0000 (0:00:00.306) 0:02:16.055 *********** 2025-06-02 17:59:23.176240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})  2025-06-02 17:59:23.176247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-06-02 17:59:23.176253 | orchestrator | 2025-06-02 17:59:23.176258 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-06-02 17:59:23.176263 | orchestrator | Monday 02 June 2025 17:59:20 +0000 (0:00:02.629) 0:02:18.684 *********** 2025-06-02 17:59:23.176273 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:23.176277 | orchestrator | 2025-06-02 17:59:23.176282 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 17:59:23.176288 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 17:59:23.176294 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 17:59:23.176299 | orchestrator | testbed-node-2 : ok=14  
changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 17:59:23.176304 | orchestrator | 2025-06-02 17:59:23.176308 | orchestrator | 2025-06-02 17:59:23.176313 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 17:59:23.176317 | orchestrator | Monday 02 June 2025 17:59:21 +0000 (0:00:00.312) 0:02:18.997 *********** 2025-06-02 17:59:23.176322 | orchestrator | =============================================================================== 2025-06-02 17:59:23.176326 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.77s 2025-06-02 17:59:23.176331 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.28s 2025-06-02 17:59:23.176335 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 32.91s 2025-06-02 17:59:23.176340 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.63s 2025-06-02 17:59:23.176347 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.47s 2025-06-02 17:59:23.176360 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.34s 2025-06-02 17:59:23.176370 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.28s 2025-06-02 17:59:23.176378 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.04s 2025-06-02 17:59:23.176385 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.69s 2025-06-02 17:59:23.176392 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.39s 2025-06-02 17:59:23.176399 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.36s 2025-06-02 17:59:23.176406 | orchestrator | grafana : Copying over grafana.ini 
-------------------------------------- 1.32s 2025-06-02 17:59:23.176412 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.21s 2025-06-02 17:59:23.176419 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.15s 2025-06-02 17:59:23.176425 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.02s 2025-06-02 17:59:23.176432 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.02s 2025-06-02 17:59:23.176439 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.91s 2025-06-02 17:59:23.176445 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.74s 2025-06-02 17:59:23.176452 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.73s 2025-06-02 17:59:23.176458 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.70s 2025-06-02 17:59:23.176469 | orchestrator | 2025-06-02 17:59:23 | INFO  | Task 4448aa2b-f0a3-4d91-bb76-e4bf0e32e957 is in state SUCCESS 2025-06-02 17:59:23.176476 | orchestrator | 2025-06-02 17:59:23 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:26.231026 | orchestrator | 2025-06-02 17:59:26 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 17:59:26.234383 | orchestrator | 2025-06-02 17:59:26 | INFO  | Task 8167152c-b66b-4ee2-b39a-3ffc65524503 is in state STARTED 2025-06-02 17:59:26.238199 | orchestrator | 2025-06-02 17:59:26 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state STARTED 2025-06-02 17:59:26.238277 | orchestrator | 2025-06-02 17:59:26 | INFO  | Wait 1 second(s) until the next check 2025-06-02 17:59:29.297088 | orchestrator | 2025-06-02 17:59:29 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 17:59:29.297357 | orchestrator | 2025-06-02 
17:59:29 | INFO  | Task 8167152c-b66b-4ee2-b39a-3ffc65524503 is in state STARTED 2025-06-02 17:59:29.297376 | orchestrator | 2025-06-02 17:59:29 | INFO  | Task 6ea3090d-6474-4bff-b9b1-d56f8f0e1088 is in state SUCCESS 2025-06-02 17:59:29.298664 | orchestrator | 2025-06-02 17:59:29.298698 | orchestrator | 2025-06-02 17:59:29.298710 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 17:59:29.298723 | orchestrator | 2025-06-02 17:59:29.298737 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-06-02 17:59:29.298755 | orchestrator | Monday 02 June 2025 17:49:48 +0000 (0:00:00.394) 0:00:00.394 *********** 2025-06-02 17:59:29.298773 | orchestrator | changed: [testbed-manager] 2025-06-02 17:59:29.298792 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.298809 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:59:29.298826 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:59:29.298846 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:59:29.298864 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:59:29.298882 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:59:29.298894 | orchestrator | 2025-06-02 17:59:29.298905 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 17:59:29.298916 | orchestrator | Monday 02 June 2025 17:49:49 +0000 (0:00:01.024) 0:00:01.419 *********** 2025-06-02 17:59:29.298927 | orchestrator | changed: [testbed-manager] 2025-06-02 17:59:29.298937 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.298948 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:59:29.298959 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:59:29.298969 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:59:29.298980 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:59:29.298990 | orchestrator | changed: [testbed-node-5] 2025-06-02 
17:59:29.299001 | orchestrator | 2025-06-02 17:59:29.299012 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 17:59:29.299023 | orchestrator | Monday 02 June 2025 17:49:50 +0000 (0:00:00.735) 0:00:02.154 *********** 2025-06-02 17:59:29.299034 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-06-02 17:59:29.299045 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-06-02 17:59:29.299055 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-06-02 17:59:29.299066 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-06-02 17:59:29.299076 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-06-02 17:59:29.299087 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-06-02 17:59:29.299097 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-06-02 17:59:29.299108 | orchestrator | 2025-06-02 17:59:29.299119 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-06-02 17:59:29.299129 | orchestrator | 2025-06-02 17:59:29.299140 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-02 17:59:29.299151 | orchestrator | Monday 02 June 2025 17:49:51 +0000 (0:00:00.928) 0:00:03.083 *********** 2025-06-02 17:59:29.299161 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:59:29.299172 | orchestrator | 2025-06-02 17:59:29.299254 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-06-02 17:59:29.299268 | orchestrator | Monday 02 June 2025 17:49:51 +0000 (0:00:00.793) 0:00:03.876 *********** 2025-06-02 17:59:29.299281 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-06-02 17:59:29.299295 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-06-02 
17:59:29.299307 | orchestrator | 2025-06-02 17:59:29.299320 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-06-02 17:59:29.299358 | orchestrator | Monday 02 June 2025 17:49:55 +0000 (0:00:03.792) 0:00:07.668 *********** 2025-06-02 17:59:29.299371 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 17:59:29.299384 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-02 17:59:29.299397 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.299409 | orchestrator | 2025-06-02 17:59:29.299422 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-02 17:59:29.299439 | orchestrator | Monday 02 June 2025 17:49:59 +0000 (0:00:03.920) 0:00:11.589 *********** 2025-06-02 17:59:29.299457 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.299522 | orchestrator | 2025-06-02 17:59:29.299542 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-06-02 17:59:29.300067 | orchestrator | Monday 02 June 2025 17:50:00 +0000 (0:00:00.649) 0:00:12.238 *********** 2025-06-02 17:59:29.300080 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.300091 | orchestrator | 2025-06-02 17:59:29.300102 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-06-02 17:59:29.300113 | orchestrator | Monday 02 June 2025 17:50:01 +0000 (0:00:01.466) 0:00:13.705 *********** 2025-06-02 17:59:29.300124 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.300134 | orchestrator | 2025-06-02 17:59:29.300160 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 17:59:29.300172 | orchestrator | Monday 02 June 2025 17:50:04 +0000 (0:00:03.261) 0:00:16.966 *********** 2025-06-02 17:59:29.300230 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.300242 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 17:59:29.300323 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.300334 | orchestrator | 2025-06-02 17:59:29.300345 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-02 17:59:29.300357 | orchestrator | Monday 02 June 2025 17:50:05 +0000 (0:00:00.631) 0:00:17.598 *********** 2025-06-02 17:59:29.300367 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:59:29.300378 | orchestrator | 2025-06-02 17:59:29.300389 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-06-02 17:59:29.300400 | orchestrator | Monday 02 June 2025 17:50:36 +0000 (0:00:30.846) 0:00:48.445 *********** 2025-06-02 17:59:29.300411 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.300422 | orchestrator | 2025-06-02 17:59:29.300433 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-02 17:59:29.300443 | orchestrator | Monday 02 June 2025 17:50:51 +0000 (0:00:15.107) 0:01:03.553 *********** 2025-06-02 17:59:29.300454 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:59:29.300465 | orchestrator | 2025-06-02 17:59:29.300476 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-02 17:59:29.300487 | orchestrator | Monday 02 June 2025 17:51:04 +0000 (0:00:12.612) 0:01:16.165 *********** 2025-06-02 17:59:29.300906 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:59:29.300925 | orchestrator | 2025-06-02 17:59:29.300936 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-06-02 17:59:29.300947 | orchestrator | Monday 02 June 2025 17:51:05 +0000 (0:00:01.806) 0:01:17.972 *********** 2025-06-02 17:59:29.300958 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.300968 | orchestrator | 2025-06-02 17:59:29.300979 | orchestrator | TASK [nova : include_tasks] 
**************************************************** 2025-06-02 17:59:29.300990 | orchestrator | Monday 02 June 2025 17:51:06 +0000 (0:00:00.605) 0:01:18.578 *********** 2025-06-02 17:59:29.301001 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:59:29.301012 | orchestrator | 2025-06-02 17:59:29.301083 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-06-02 17:59:29.301096 | orchestrator | Monday 02 June 2025 17:51:07 +0000 (0:00:00.671) 0:01:19.250 *********** 2025-06-02 17:59:29.301106 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:59:29.301117 | orchestrator | 2025-06-02 17:59:29.301143 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-02 17:59:29.301154 | orchestrator | Monday 02 June 2025 17:51:26 +0000 (0:00:19.142) 0:01:38.393 *********** 2025-06-02 17:59:29.301165 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.301176 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.301210 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.301221 | orchestrator | 2025-06-02 17:59:29.301232 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-06-02 17:59:29.301242 | orchestrator | 2025-06-02 17:59:29.301253 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-02 17:59:29.301264 | orchestrator | Monday 02 June 2025 17:51:26 +0000 (0:00:00.312) 0:01:38.705 *********** 2025-06-02 17:59:29.301274 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:59:29.301285 | orchestrator | 2025-06-02 17:59:29.301296 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-06-02 17:59:29.301306 | orchestrator | Monday 02 June 2025 17:51:27 +0000 
(0:00:00.581) 0:01:39.286 *********** 2025-06-02 17:59:29.301317 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.301328 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.301338 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.301349 | orchestrator | 2025-06-02 17:59:29.301360 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-06-02 17:59:29.301370 | orchestrator | Monday 02 June 2025 17:51:29 +0000 (0:00:02.018) 0:01:41.305 *********** 2025-06-02 17:59:29.301381 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.301392 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.301402 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.301413 | orchestrator | 2025-06-02 17:59:29.301423 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-02 17:59:29.301440 | orchestrator | Monday 02 June 2025 17:51:31 +0000 (0:00:02.109) 0:01:43.414 *********** 2025-06-02 17:59:29.301458 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.301477 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.301496 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.301514 | orchestrator | 2025-06-02 17:59:29.301532 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-02 17:59:29.301551 | orchestrator | Monday 02 June 2025 17:51:31 +0000 (0:00:00.451) 0:01:43.866 *********** 2025-06-02 17:59:29.301570 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 17:59:29.301589 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.302256 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 17:59:29.302287 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.302299 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-02 17:59:29.302525 | orchestrator | ok: [testbed-node-0 -> {{ 
service_rabbitmq_delegate_host }}] 2025-06-02 17:59:29.302545 | orchestrator | 2025-06-02 17:59:29.302557 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-06-02 17:59:29.302567 | orchestrator | Monday 02 June 2025 17:51:41 +0000 (0:00:09.657) 0:01:53.523 *********** 2025-06-02 17:59:29.302578 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.302589 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.302600 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.302655 | orchestrator | 2025-06-02 17:59:29.302669 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-06-02 17:59:29.302681 | orchestrator | Monday 02 June 2025 17:51:42 +0000 (0:00:00.681) 0:01:54.206 *********** 2025-06-02 17:59:29.302691 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-02 17:59:29.302702 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.302723 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-02 17:59:29.302734 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.302745 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-02 17:59:29.302768 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.302779 | orchestrator | 2025-06-02 17:59:29.302790 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-02 17:59:29.302801 | orchestrator | Monday 02 June 2025 17:51:44 +0000 (0:00:01.926) 0:01:56.132 *********** 2025-06-02 17:59:29.302812 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.302822 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.302833 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.302844 | orchestrator | 2025-06-02 17:59:29.302854 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-06-02 17:59:29.302866 | 
orchestrator | Monday 02 June 2025 17:51:45 +0000 (0:00:01.003) 0:01:57.136 *********** 2025-06-02 17:59:29.302876 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.302887 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.302898 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.302908 | orchestrator | 2025-06-02 17:59:29.302919 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-06-02 17:59:29.302936 | orchestrator | Monday 02 June 2025 17:51:46 +0000 (0:00:01.216) 0:01:58.352 *********** 2025-06-02 17:59:29.302956 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.302973 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.303109 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.303127 | orchestrator | 2025-06-02 17:59:29.303138 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-06-02 17:59:29.303149 | orchestrator | Monday 02 June 2025 17:51:50 +0000 (0:00:03.842) 0:02:02.195 *********** 2025-06-02 17:59:29.303160 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.303171 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.303223 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:59:29.303235 | orchestrator | 2025-06-02 17:59:29.303246 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-02 17:59:29.303257 | orchestrator | Monday 02 June 2025 17:52:11 +0000 (0:00:21.493) 0:02:23.688 *********** 2025-06-02 17:59:29.303268 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.303279 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.303289 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:59:29.303300 | orchestrator | 2025-06-02 17:59:29.303311 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-02 17:59:29.303322 | orchestrator | 
Monday 02 June 2025 17:52:24 +0000 (0:00:12.596) 0:02:36.285 *********** 2025-06-02 17:59:29.303332 | orchestrator | ok: [testbed-node-0] 2025-06-02 17:59:29.303343 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.303354 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.303364 | orchestrator | 2025-06-02 17:59:29.303375 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-06-02 17:59:29.303385 | orchestrator | Monday 02 June 2025 17:52:25 +0000 (0:00:00.821) 0:02:37.107 *********** 2025-06-02 17:59:29.303396 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.303407 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.303417 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.303428 | orchestrator | 2025-06-02 17:59:29.303439 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-06-02 17:59:29.303449 | orchestrator | Monday 02 June 2025 17:52:36 +0000 (0:00:11.385) 0:02:48.493 *********** 2025-06-02 17:59:29.303460 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.303471 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.303481 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.303492 | orchestrator | 2025-06-02 17:59:29.303503 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-02 17:59:29.303513 | orchestrator | Monday 02 June 2025 17:52:37 +0000 (0:00:01.580) 0:02:50.073 *********** 2025-06-02 17:59:29.303524 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.303535 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.303545 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.303566 | orchestrator | 2025-06-02 17:59:29.303577 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-06-02 17:59:29.303588 | orchestrator | 2025-06-02 
17:59:29.303598 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 17:59:29.303609 | orchestrator | Monday 02 June 2025 17:52:38 +0000 (0:00:00.351) 0:02:50.425 *********** 2025-06-02 17:59:29.303620 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:59:29.303632 | orchestrator | 2025-06-02 17:59:29.303643 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-06-02 17:59:29.303654 | orchestrator | Monday 02 June 2025 17:52:38 +0000 (0:00:00.552) 0:02:50.978 *********** 2025-06-02 17:59:29.303665 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-06-02 17:59:29.303676 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-06-02 17:59:29.303687 | orchestrator | 2025-06-02 17:59:29.303697 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-06-02 17:59:29.303710 | orchestrator | Monday 02 June 2025 17:52:42 +0000 (0:00:03.272) 0:02:54.251 *********** 2025-06-02 17:59:29.303723 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-06-02 17:59:29.303737 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-06-02 17:59:29.303750 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-06-02 17:59:29.303763 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-06-02 17:59:29.303776 | orchestrator | 2025-06-02 17:59:29.303788 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-06-02 17:59:29.303807 | orchestrator | Monday 02 June 2025 17:52:49 
+0000 (0:00:07.153) 0:03:01.404 *********** 2025-06-02 17:59:29.303820 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 17:59:29.303833 | orchestrator | 2025-06-02 17:59:29.303846 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-06-02 17:59:29.303858 | orchestrator | Monday 02 June 2025 17:52:52 +0000 (0:00:03.139) 0:03:04.544 *********** 2025-06-02 17:59:29.303870 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 17:59:29.303882 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-06-02 17:59:29.303895 | orchestrator | 2025-06-02 17:59:29.303907 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-06-02 17:59:29.303920 | orchestrator | Monday 02 June 2025 17:52:56 +0000 (0:00:03.848) 0:03:08.392 *********** 2025-06-02 17:59:29.303932 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 17:59:29.303944 | orchestrator | 2025-06-02 17:59:29.303957 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-06-02 17:59:29.303970 | orchestrator | Monday 02 June 2025 17:52:59 +0000 (0:00:03.174) 0:03:11.566 *********** 2025-06-02 17:59:29.303983 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-06-02 17:59:29.303999 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-06-02 17:59:29.304018 | orchestrator | 2025-06-02 17:59:29.304036 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-02 17:59:29.304166 | orchestrator | Monday 02 June 2025 17:53:06 +0000 (0:00:07.479) 0:03:19.046 *********** 2025-06-02 17:59:29.304223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.304251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.304265 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.304364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 
'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.304382 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.304403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.304414 | orchestrator | 2025-06-02 17:59:29.304425 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-06-02 17:59:29.304436 | orchestrator | Monday 02 June 2025 17:53:08 +0000 (0:00:01.668) 0:03:20.715 *********** 2025-06-02 17:59:29.304447 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.304458 | orchestrator | 2025-06-02 17:59:29.304469 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-06-02 17:59:29.304480 | orchestrator | Monday 02 June 
2025 17:53:08 +0000 (0:00:00.109) 0:03:20.824 *********** 2025-06-02 17:59:29.304491 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.304501 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.304512 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.304523 | orchestrator | 2025-06-02 17:59:29.304534 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-06-02 17:59:29.304545 | orchestrator | Monday 02 June 2025 17:53:09 +0000 (0:00:00.485) 0:03:21.310 *********** 2025-06-02 17:59:29.304555 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-02 17:59:29.304566 | orchestrator | 2025-06-02 17:59:29.304577 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-06-02 17:59:29.304588 | orchestrator | Monday 02 June 2025 17:53:09 +0000 (0:00:00.679) 0:03:21.990 *********** 2025-06-02 17:59:29.304598 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.304609 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.304620 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.304631 | orchestrator | 2025-06-02 17:59:29.304641 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-02 17:59:29.304652 | orchestrator | Monday 02 June 2025 17:53:10 +0000 (0:00:00.281) 0:03:22.272 *********** 2025-06-02 17:59:29.304663 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:59:29.304701 | orchestrator | 2025-06-02 17:59:29.304713 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-02 17:59:29.304724 | orchestrator | Monday 02 June 2025 17:53:10 +0000 (0:00:00.633) 0:03:22.905 *********** 2025-06-02 17:59:29.304741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.304792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.304807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.304820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.304837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.304881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.304893 | orchestrator | 2025-06-02 17:59:29.304905 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-02 17:59:29.304916 | orchestrator | Monday 02 June 2025 17:53:13 +0000 (0:00:02.711) 0:03:25.617 *********** 2025-06-02 17:59:29.304928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:59:29.304941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.304953 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.304976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:59:29.304998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.305012 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.305052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:59:29.305068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.305082 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.305094 | orchestrator | 2025-06-02 17:59:29.305107 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-02 17:59:29.305119 | orchestrator | Monday 02 June 2025 17:53:14 +0000 (0:00:01.404) 0:03:27.021 *********** 2025-06-02 
17:59:29.305133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:59:29.305159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.305173 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.305239 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:59:29.305269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.305281 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 17:59:29.305293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:59:29.305310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.305330 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.305341 | 
orchestrator | 2025-06-02 17:59:29.305351 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-06-02 17:59:29.305363 | orchestrator | Monday 02 June 2025 17:53:15 +0000 (0:00:00.955) 0:03:27.977 *********** 2025-06-02 17:59:29.305399 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.305413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.305431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.305451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.305490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.305503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.305514 | orchestrator | 2025-06-02 17:59:29.305525 | orchestrator | TASK [nova : 
Copying over nova.conf] ******************************************* 2025-06-02 17:59:29.305536 | orchestrator | Monday 02 June 2025 17:53:19 +0000 (0:00:03.285) 0:03:31.263 *********** 2025-06-02 17:59:29.305548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.305565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.305612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 
17:59:29.305625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.305637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.305649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.305668 | orchestrator | 2025-06-02 17:59:29.305679 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-02 
17:59:29.305690 | orchestrator | Monday 02 June 2025 17:53:28 +0000 (0:00:08.982) 0:03:40.245 *********** 2025-06-02 17:59:29.305706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:59:29.305743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 
17:59:29.305755 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.305767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:59:29.305779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.305790 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 17:59:29.305811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-02 17:59:29.305823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.305834 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.305845 | 
orchestrator | 2025-06-02 17:59:29.305856 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-02 17:59:29.305866 | orchestrator | Monday 02 June 2025 17:53:28 +0000 (0:00:00.736) 0:03:40.982 *********** 2025-06-02 17:59:29.305902 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:59:29.305915 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.305926 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:59:29.305936 | orchestrator | 2025-06-02 17:59:29.305947 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-02 17:59:29.305958 | orchestrator | Monday 02 June 2025 17:53:31 +0000 (0:00:03.005) 0:03:43.987 *********** 2025-06-02 17:59:29.305969 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.305980 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.305991 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.306002 | orchestrator | 2025-06-02 17:59:29.306012 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-02 17:59:29.306058 | orchestrator | Monday 02 June 2025 17:53:32 +0000 (0:00:00.772) 0:03:44.760 *********** 2025-06-02 17:59:29.306070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.306091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.306141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-02 17:59:29.306156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.306168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.306240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.306261 | orchestrator | 2025-06-02 17:59:29.306273 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 17:59:29.306284 | orchestrator | Monday 02 June 2025 17:53:35 +0000 (0:00:02.498) 0:03:47.259 *********** 2025-06-02 17:59:29.306295 | orchestrator | 2025-06-02 17:59:29.306305 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 17:59:29.306316 | orchestrator | Monday 02 June 2025 17:53:35 +0000 (0:00:00.151) 0:03:47.410 *********** 2025-06-02 17:59:29.306327 | orchestrator | 2025-06-02 17:59:29.306337 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-02 17:59:29.306348 | orchestrator | Monday 02 June 2025 17:53:35 +0000 (0:00:00.144) 0:03:47.554 *********** 2025-06-02 17:59:29.306359 | orchestrator | 2025-06-02 17:59:29.306370 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-02 
17:59:29.306380 | orchestrator | Monday 02 June 2025 17:53:35 +0000 (0:00:00.517) 0:03:48.072 *********** 2025-06-02 17:59:29.306391 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.306402 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:59:29.306412 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:59:29.306423 | orchestrator | 2025-06-02 17:59:29.306434 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-02 17:59:29.306444 | orchestrator | Monday 02 June 2025 17:53:57 +0000 (0:00:21.983) 0:04:10.055 *********** 2025-06-02 17:59:29.306455 | orchestrator | changed: [testbed-node-2] 2025-06-02 17:59:29.306466 | orchestrator | changed: [testbed-node-0] 2025-06-02 17:59:29.306476 | orchestrator | changed: [testbed-node-1] 2025-06-02 17:59:29.306487 | orchestrator | 2025-06-02 17:59:29.306498 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-02 17:59:29.306508 | orchestrator | 2025-06-02 17:59:29.306519 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 17:59:29.306536 | orchestrator | Monday 02 June 2025 17:54:10 +0000 (0:00:12.770) 0:04:22.826 *********** 2025-06-02 17:59:29.306547 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:59:29.306558 | orchestrator | 2025-06-02 17:59:29.306569 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 17:59:29.306579 | orchestrator | Monday 02 June 2025 17:54:13 +0000 (0:00:02.527) 0:04:25.354 *********** 2025-06-02 17:59:29.306590 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.306599 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.306609 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.306618 | 
orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.306627 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.306637 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.306647 | orchestrator | 2025-06-02 17:59:29.306656 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-06-02 17:59:29.306666 | orchestrator | Monday 02 June 2025 17:54:15 +0000 (0:00:01.958) 0:04:27.312 *********** 2025-06-02 17:59:29.306675 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.306685 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.306694 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.306704 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:59:29.306713 | orchestrator | 2025-06-02 17:59:29.306750 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-02 17:59:29.306767 | orchestrator | Monday 02 June 2025 17:54:17 +0000 (0:00:01.959) 0:04:29.271 *********** 2025-06-02 17:59:29.306777 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-06-02 17:59:29.306787 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-02 17:59:29.306797 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-06-02 17:59:29.306806 | orchestrator | 2025-06-02 17:59:29.306816 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-02 17:59:29.306825 | orchestrator | Monday 02 June 2025 17:54:18 +0000 (0:00:00.865) 0:04:30.137 *********** 2025-06-02 17:59:29.306835 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-02 17:59:29.306844 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-02 17:59:29.306854 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-06-02 17:59:29.306863 | orchestrator | 2025-06-02 17:59:29.306873 | 
orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-02 17:59:29.306882 | orchestrator | Monday 02 June 2025 17:54:19 +0000 (0:00:01.699) 0:04:31.836 *********** 2025-06-02 17:59:29.306892 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-02 17:59:29.306901 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.306911 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-02 17:59:29.306920 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.306930 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-02 17:59:29.306939 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.306948 | orchestrator | 2025-06-02 17:59:29.306958 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-02 17:59:29.306967 | orchestrator | Monday 02 June 2025 17:54:20 +0000 (0:00:01.184) 0:04:33.021 *********** 2025-06-02 17:59:29.306977 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 17:59:29.306987 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 17:59:29.306996 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.307006 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 17:59:29.307015 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 17:59:29.307025 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 17:59:29.307034 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 17:59:29.307044 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-02 17:59:29.307053 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.307063 | orchestrator | 
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-02 17:59:29.307073 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-02 17:59:29.307082 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.307092 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 17:59:29.307101 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 17:59:29.307110 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-02 17:59:29.307120 | orchestrator | 2025-06-02 17:59:29.307129 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-02 17:59:29.307139 | orchestrator | Monday 02 June 2025 17:54:22 +0000 (0:00:01.331) 0:04:34.353 *********** 2025-06-02 17:59:29.307148 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.307158 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:59:29.307167 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.307177 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:59:29.307234 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.307243 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:59:29.307260 | orchestrator | 2025-06-02 17:59:29.307269 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-02 17:59:29.307279 | orchestrator | Monday 02 June 2025 17:54:24 +0000 (0:00:02.339) 0:04:36.693 *********** 2025-06-02 17:59:29.307289 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.307298 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.307308 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.307317 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:59:29.307327 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:59:29.307341 | orchestrator 
| changed: [testbed-node-4] 2025-06-02 17:59:29.307351 | orchestrator | 2025-06-02 17:59:29.307359 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-02 17:59:29.307367 | orchestrator | Monday 02 June 2025 17:54:27 +0000 (0:00:02.643) 0:04:39.336 *********** 2025-06-02 17:59:29.307398 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307408 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307418 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307426 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307453 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307482 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307492 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307534 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307560 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307569 | orchestrator | changed: [testbed-node-1] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307585 | orchestrator | 2025-06-02 17:59:29.307593 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-02 17:59:29.307601 | orchestrator | Monday 02 June 2025 17:54:31 +0000 (0:00:03.853) 0:04:43.189 *********** 2025-06-02 17:59:29.307609 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 17:59:29.307618 | orchestrator | 2025-06-02 17:59:29.307626 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-02 17:59:29.307634 | orchestrator | Monday 02 June 2025 17:54:33 +0000 (0:00:02.157) 0:04:45.347 *********** 2025-06-02 17:59:29.307648 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307660 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307687 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307696 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307704 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307726 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307746 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307774 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307805 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307828 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 
'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.307836 | orchestrator | 2025-06-02 17:59:29.307844 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-02 17:59:29.307852 | orchestrator | Monday 02 June 2025 17:54:38 +0000 (0:00:04.853) 0:04:50.201 *********** 2025-06-02 17:59:29.307887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:59:29.307904 | orchestrator | skipping: [testbed-node-3] 
=> (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:59:29.307923 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.307935 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.307947 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:59:29.307966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:59:29.308011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.308025 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.308038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:59:29.308064 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:59:29.308077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.308090 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.308108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:59:29.308123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.308137 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.308207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:59:29.308226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.308248 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.308262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:59:29.308271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.308279 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.308287 | orchestrator | 2025-06-02 17:59:29.308295 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-02 17:59:29.308302 | orchestrator | Monday 02 June 2025 17:54:41 +0000 (0:00:03.613) 0:04:53.815 *********** 2025-06-02 17:59:29.308315 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:59:29.308324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:59:29.308356 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.308374 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.308383 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:59:29.308391 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:59:29.308399 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.308407 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.308419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:59:29.308445 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:59:29.308455 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.308468 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.308476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:59:29.308485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.308493 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.308501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:59:29.308509 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': 
{'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.308517 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.308544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:59:29.308561 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.308570 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.308582 | orchestrator | 2025-06-02 17:59:29.308595 | orchestrator | TASK [nova-cell : 
include_tasks] *********************************************** 2025-06-02 17:59:29.308608 | orchestrator | Monday 02 June 2025 17:54:45 +0000 (0:00:03.483) 0:04:57.298 *********** 2025-06-02 17:59:29.308622 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.308635 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.308649 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.308660 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-02 17:59:29.308667 | orchestrator | 2025-06-02 17:59:29.308675 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-06-02 17:59:29.308683 | orchestrator | Monday 02 June 2025 17:54:46 +0000 (0:00:01.321) 0:04:58.619 *********** 2025-06-02 17:59:29.308691 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 17:59:29.308699 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 17:59:29.308706 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 17:59:29.308714 | orchestrator | 2025-06-02 17:59:29.308722 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-06-02 17:59:29.308730 | orchestrator | Monday 02 June 2025 17:54:48 +0000 (0:00:02.296) 0:05:00.916 *********** 2025-06-02 17:59:29.308737 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-02 17:59:29.308745 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 17:59:29.308753 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-02 17:59:29.308761 | orchestrator | 2025-06-02 17:59:29.308768 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-06-02 17:59:29.308776 | orchestrator | Monday 02 June 2025 17:54:51 +0000 (0:00:02.380) 0:05:03.297 *********** 2025-06-02 17:59:29.308784 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:59:29.308792 | orchestrator | ok: 
[testbed-node-4] 2025-06-02 17:59:29.308800 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:59:29.308808 | orchestrator | 2025-06-02 17:59:29.308816 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-06-02 17:59:29.308823 | orchestrator | Monday 02 June 2025 17:54:51 +0000 (0:00:00.643) 0:05:03.941 *********** 2025-06-02 17:59:29.308831 | orchestrator | ok: [testbed-node-3] 2025-06-02 17:59:29.308839 | orchestrator | ok: [testbed-node-4] 2025-06-02 17:59:29.308846 | orchestrator | ok: [testbed-node-5] 2025-06-02 17:59:29.308854 | orchestrator | 2025-06-02 17:59:29.308862 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-06-02 17:59:29.308870 | orchestrator | Monday 02 June 2025 17:54:52 +0000 (0:00:01.092) 0:05:05.033 *********** 2025-06-02 17:59:29.308878 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-02 17:59:29.308885 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-02 17:59:29.308893 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-02 17:59:29.308901 | orchestrator | 2025-06-02 17:59:29.309002 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-06-02 17:59:29.309028 | orchestrator | Monday 02 June 2025 17:54:54 +0000 (0:00:01.732) 0:05:06.766 *********** 2025-06-02 17:59:29.309036 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-02 17:59:29.309043 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-02 17:59:29.309051 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-02 17:59:29.309065 | orchestrator | 2025-06-02 17:59:29.309075 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-06-02 17:59:29.309088 | orchestrator | Monday 02 June 2025 17:54:56 +0000 (0:00:01.530) 0:05:08.296 *********** 2025-06-02 
17:59:29.309100 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-02 17:59:29.309112 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-02 17:59:29.309132 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-02 17:59:29.309145 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-06-02 17:59:29.309158 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-06-02 17:59:29.309171 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-06-02 17:59:29.309196 | orchestrator | 2025-06-02 17:59:29.309204 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-06-02 17:59:29.309212 | orchestrator | Monday 02 June 2025 17:55:03 +0000 (0:00:07.251) 0:05:15.548 *********** 2025-06-02 17:59:29.309220 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.309228 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.309236 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.309244 | orchestrator | 2025-06-02 17:59:29.309251 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-06-02 17:59:29.309259 | orchestrator | Monday 02 June 2025 17:55:04 +0000 (0:00:00.556) 0:05:16.105 *********** 2025-06-02 17:59:29.309267 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.309275 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.309283 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.309291 | orchestrator | 2025-06-02 17:59:29.309298 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-06-02 17:59:29.309306 | orchestrator | Monday 02 June 2025 17:55:04 +0000 (0:00:00.665) 0:05:16.770 *********** 2025-06-02 17:59:29.309314 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:59:29.309354 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:59:29.309363 | 
orchestrator | changed: [testbed-node-5] 2025-06-02 17:59:29.309371 | orchestrator | 2025-06-02 17:59:29.309379 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-06-02 17:59:29.309387 | orchestrator | Monday 02 June 2025 17:55:08 +0000 (0:00:03.521) 0:05:20.291 *********** 2025-06-02 17:59:29.309395 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-02 17:59:29.309404 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-02 17:59:29.309412 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-02 17:59:29.309420 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-02 17:59:29.309428 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-02 17:59:29.309436 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-02 17:59:29.309444 | orchestrator | 2025-06-02 17:59:29.309452 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-06-02 17:59:29.309460 | orchestrator | Monday 02 June 2025 17:55:12 +0000 (0:00:04.667) 0:05:24.959 *********** 2025-06-02 17:59:29.309468 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-02 17:59:29.309476 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 17:59:29.309484 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 17:59:29.309492 | orchestrator | changed: 
[testbed-node-3] => (item=None) 2025-06-02 17:59:29.309499 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:59:29.309514 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-02 17:59:29.309522 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:59:29.309530 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-02 17:59:29.309538 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:59:29.309546 | orchestrator | 2025-06-02 17:59:29.309554 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-06-02 17:59:29.309562 | orchestrator | Monday 02 June 2025 17:55:16 +0000 (0:00:03.582) 0:05:28.542 *********** 2025-06-02 17:59:29.309570 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.309577 | orchestrator | 2025-06-02 17:59:29.309585 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-06-02 17:59:29.309593 | orchestrator | Monday 02 June 2025 17:55:16 +0000 (0:00:00.119) 0:05:28.662 *********** 2025-06-02 17:59:29.309601 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.309609 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.309617 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.309624 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.309632 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.309640 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.309648 | orchestrator | 2025-06-02 17:59:29.309655 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-06-02 17:59:29.309663 | orchestrator | Monday 02 June 2025 17:55:17 +0000 (0:00:01.009) 0:05:29.671 *********** 2025-06-02 17:59:29.309671 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-02 17:59:29.309679 | orchestrator | 2025-06-02 17:59:29.309687 | orchestrator | TASK [nova-cell : Set vendordata file path] 
************************************ 2025-06-02 17:59:29.309694 | orchestrator | Monday 02 June 2025 17:55:19 +0000 (0:00:01.433) 0:05:31.105 *********** 2025-06-02 17:59:29.309702 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.309710 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.309718 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.309726 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.309734 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.309741 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.309749 | orchestrator | 2025-06-02 17:59:29.309757 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-06-02 17:59:29.309764 | orchestrator | Monday 02 June 2025 17:55:19 +0000 (0:00:00.728) 0:05:31.833 *********** 2025-06-02 17:59:29.309778 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309808 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309823 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309861 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309875 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309896 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309904 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2025-06-02 17:59:29.309940 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309953 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.309961 | orchestrator | 2025-06-02 17:59:29.309969 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-06-02 17:59:29.309977 | orchestrator | Monday 02 June 2025 17:55:24 +0000 (0:00:05.088) 0:05:36.922 *********** 2025-06-02 17:59:29.309985 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:59:29.309994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:59:29.310006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:59:29.310014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:59:29.310088 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:59:29.310104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:59:29.310117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.310131 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.310152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.310175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:59:29.310263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:59:29.310274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-02 17:59:29.310282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.310290 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.310306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-02 17:59:29.310315 | orchestrator | 2025-06-02 17:59:29.310323 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-02 17:59:29.310331 | orchestrator | Monday 02 June 2025 17:55:31 +0000 (0:00:06.986) 0:05:43.909 *********** 2025-06-02 17:59:29.310338 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.310346 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.310354 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.310367 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.310375 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.310382 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.310390 | orchestrator | 2025-06-02 17:59:29.310398 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-02 17:59:29.310406 | orchestrator | Monday 02 June 2025 17:55:33 +0000 (0:00:01.710) 0:05:45.619 *********** 2025-06-02 17:59:29.310414 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 17:59:29.310422 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 
'dest': 'qemu.conf'})  2025-06-02 17:59:29.310429 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-02 17:59:29.310443 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 17:59:29.310451 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 17:59:29.310459 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-02 17:59:29.310467 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 17:59:29.310475 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.310483 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 17:59:29.310491 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.310498 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-02 17:59:29.310506 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.310514 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 17:59:29.310522 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 17:59:29.310529 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-02 17:59:29.310537 | orchestrator | 2025-06-02 17:59:29.310545 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-02 17:59:29.310553 | orchestrator | Monday 02 June 2025 17:55:37 +0000 (0:00:03.917) 0:05:49.537 *********** 2025-06-02 17:59:29.310560 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.310568 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.310576 | 
orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.310583 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.310591 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.310599 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.310606 | orchestrator | 2025-06-02 17:59:29.310614 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-02 17:59:29.310622 | orchestrator | Monday 02 June 2025 17:55:38 +0000 (0:00:00.883) 0:05:50.421 *********** 2025-06-02 17:59:29.310629 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 17:59:29.310637 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 17:59:29.310645 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-02 17:59:29.310653 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 17:59:29.310661 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 17:59:29.310668 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-02 17:59:29.310676 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 17:59:29.310689 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 17:59:29.310697 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-02 17:59:29.310704 | orchestrator | skipping: [testbed-node-1] => 
(item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 17:59:29.310712 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.310722 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 17:59:29.310735 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.310747 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-02 17:59:29.310759 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.310777 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 17:59:29.310789 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 17:59:29.310799 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-02 17:59:29.310807 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 17:59:29.310814 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 17:59:29.310820 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-02 17:59:29.310827 | orchestrator | 2025-06-02 17:59:29.310834 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-02 17:59:29.310840 | orchestrator | Monday 02 June 2025 17:55:43 +0000 (0:00:05.440) 0:05:55.862 *********** 2025-06-02 17:59:29.310847 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 17:59:29.310858 | orchestrator | skipping: 
[testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 17:59:29.310865 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-02 17:59:29.310871 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 17:59:29.310878 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 17:59:29.310884 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 17:59:29.310891 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 17:59:29.310898 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-02 17:59:29.310904 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-02 17:59:29.310911 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 17:59:29.310917 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 17:59:29.310924 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-02 17:59:29.310930 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 17:59:29.310937 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 17:59:29.310943 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 17:59:29.310950 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.310957 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 17:59:29.310968 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.310975 | 
orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-02 17:59:29.310981 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-02 17:59:29.310988 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.310994 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 17:59:29.311001 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 17:59:29.311007 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-02 17:59:29.311014 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 17:59:29.311020 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 17:59:29.311027 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-02 17:59:29.311033 | orchestrator | 2025-06-02 17:59:29.311040 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-02 17:59:29.311046 | orchestrator | Monday 02 June 2025 17:55:52 +0000 (0:00:08.675) 0:06:04.537 *********** 2025-06-02 17:59:29.311053 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.311059 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.311066 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.311072 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.311079 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.311085 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.311092 | orchestrator | 2025-06-02 17:59:29.311098 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-02 17:59:29.311105 | orchestrator | Monday 02 June 2025 17:55:53 +0000 
(0:00:00.592) 0:06:05.130 *********** 2025-06-02 17:59:29.311112 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.311118 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.311125 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.311132 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.311138 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.311145 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.311151 | orchestrator | 2025-06-02 17:59:29.311158 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-02 17:59:29.311168 | orchestrator | Monday 02 June 2025 17:55:53 +0000 (0:00:00.908) 0:06:06.039 *********** 2025-06-02 17:59:29.311174 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.311199 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.311210 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.311217 | orchestrator | changed: [testbed-node-5] 2025-06-02 17:59:29.311223 | orchestrator | changed: [testbed-node-3] 2025-06-02 17:59:29.311230 | orchestrator | changed: [testbed-node-4] 2025-06-02 17:59:29.311236 | orchestrator | 2025-06-02 17:59:29.311243 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-02 17:59:29.311249 | orchestrator | Monday 02 June 2025 17:55:56 +0000 (0:00:02.334) 0:06:08.373 *********** 2025-06-02 17:59:29.311261 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:59:29.311274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:59:29.311281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.311288 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.311295 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:59:29.311305 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:59:29.311312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.311319 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.311331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:59:29.311351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.311363 | orchestrator | skipping: [testbed-node-0] 2025-06-02 17:59:29.311375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:59:29.311385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.311392 | orchestrator | skipping: [testbed-node-1] 2025-06-02 17:59:29.311403 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-02 17:59:29.311410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 
'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-02 17:59:29.311422 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.311435 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.311442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-02 17:59:29.311449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-02 17:59:29.311456 | orchestrator | skipping: [testbed-node-2] 2025-06-02 17:59:29.311463 | orchestrator | 2025-06-02 17:59:29.311469 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-06-02 17:59:29.311476 | orchestrator | Monday 02 June 2025 17:55:59 +0000 (0:00:03.442) 0:06:11.816 *********** 2025-06-02 17:59:29.311483 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-02 17:59:29.311489 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-02 17:59:29.311496 | orchestrator | skipping: [testbed-node-3] 2025-06-02 17:59:29.311502 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-02 17:59:29.311509 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-02 17:59:29.311515 | orchestrator | skipping: [testbed-node-4] 2025-06-02 17:59:29.311522 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-02 17:59:29.311528 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-02 17:59:29.311535 | orchestrator | skipping: [testbed-node-5] 2025-06-02 17:59:29.311541 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-02 17:59:29.311548 | orchestrator | skipping: 
[testbed-node-0] => (item=nova-compute-ironic)
2025-06-02 17:59:29.311555 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:29.311561 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-06-02 17:59:29.311568 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-06-02 17:59:29.311574 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:59:29.311581 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-06-02 17:59:29.311588 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-06-02 17:59:29.311594 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:59:29.311601 | orchestrator |
2025-06-02 17:59:29.311607 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-06-02 17:59:29.311614 | orchestrator | Monday 02 June 2025 17:56:00 +0000 (0:00:00.955) 0:06:12.772 ***********
2025-06-02 17:59:29.311628 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 17:59:29.311641 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 17:59:29.311648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-06-02 17:59:29.311655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 17:59:29.311662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 17:59:29.311673 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 17:59:29.311684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-06-02 17:59:29.311697 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 17:59:29.311704 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-06-02 17:59:29.311711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 17:59:29.311718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 17:59:29.311725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 17:59:29.311743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-06-02 17:59:29.311754 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 17:59:29.311761 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-06-02 17:59:29.311768 | orchestrator |
2025-06-02 17:59:29.311774 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-06-02 17:59:29.311781 | orchestrator | Monday 02 June 2025 17:56:04 +0000 (0:00:03.401) 0:06:16.173 ***********
2025-06-02 17:59:29.311788 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:59:29.311794 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:59:29.311801 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:59:29.311808 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:29.311814 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:59:29.311821 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:59:29.311827 | orchestrator |
2025-06-02 17:59:29.311834 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-02 17:59:29.311841 | orchestrator | Monday 02 June 2025 17:56:04 +0000 (0:00:00.717) 0:06:16.890 ***********
2025-06-02 17:59:29.311847 | orchestrator |
2025-06-02 17:59:29.311854 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-02 17:59:29.311860 | orchestrator | Monday 02 June 2025 17:56:05 +0000 (0:00:00.515) 0:06:17.406 ***********
2025-06-02 17:59:29.311867 | orchestrator |
2025-06-02 17:59:29.311873 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-02 17:59:29.311880 | orchestrator | Monday 02 June 2025 17:56:05 +0000 (0:00:00.254) 0:06:17.660 ***********
2025-06-02 17:59:29.311887 | orchestrator |
2025-06-02 17:59:29.311893 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-02 17:59:29.311904 | orchestrator | Monday 02 June 2025 17:56:05 +0000 (0:00:00.181) 0:06:17.841 ***********
2025-06-02 17:59:29.311911 | orchestrator |
2025-06-02 17:59:29.311917 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-02 17:59:29.311924 | orchestrator | Monday 02 June 2025 17:56:05 +0000 (0:00:00.162) 0:06:18.004 ***********
2025-06-02 17:59:29.311930 | orchestrator |
2025-06-02 17:59:29.311937 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-06-02 17:59:29.311943 | orchestrator | Monday 02 June 2025 17:56:06 +0000 (0:00:00.147) 0:06:18.151 ***********
2025-06-02 17:59:29.311950 | orchestrator |
2025-06-02 17:59:29.311957 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-06-02 17:59:29.311963 | orchestrator | Monday 02 June 2025 17:56:06 +0000 (0:00:00.139) 0:06:18.291 ***********
2025-06-02 17:59:29.311970 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:59:29.311976 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:59:29.311984 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:59:29.311995 | orchestrator |
2025-06-02 17:59:29.312005 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-06-02 17:59:29.312015 | orchestrator | Monday 02 June 2025 17:56:24 +0000 (0:00:18.431) 0:06:36.722 ***********
2025-06-02 17:59:29.312025 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:59:29.312035 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:59:29.312046 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:59:29.312056 | orchestrator |
2025-06-02 17:59:29.312066 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-06-02 17:59:29.312081 | orchestrator | Monday 02 June 2025 17:56:38 +0000 (0:00:13.939) 0:06:50.661 ***********
2025-06-02 17:59:29.312092 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:59:29.312103 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:59:29.312113 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:59:29.312125 | orchestrator |
2025-06-02 17:59:29.312134 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-06-02 17:59:29.312141 | orchestrator | Monday 02 June 2025 17:57:01 +0000 (0:00:22.767) 0:07:13.429 ***********
2025-06-02 17:59:29.312147 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:59:29.312154 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:59:29.312160 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:59:29.312167 | orchestrator |
2025-06-02 17:59:29.312174 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-06-02 17:59:29.312197 | orchestrator | Monday 02 June 2025 17:57:41 +0000 (0:00:39.927) 0:07:53.357 ***********
2025-06-02 17:59:29.312204 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:59:29.312210 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:59:29.312217 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:59:29.312223 | orchestrator |
2025-06-02 17:59:29.312230 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-06-02 17:59:29.312237 | orchestrator | Monday 02 June 2025 17:57:42 +0000 (0:00:01.139) 0:07:54.497 ***********
2025-06-02 17:59:29.312243 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:59:29.312250 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:59:29.312257 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:59:29.312263 | orchestrator |
2025-06-02 17:59:29.312275 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-06-02 17:59:29.312282 | orchestrator | Monday 02 June 2025 17:57:43 +0000 (0:00:00.813) 0:07:55.310 ***********
2025-06-02 17:59:29.312289 | orchestrator | changed: [testbed-node-5]
2025-06-02 17:59:29.312296 | orchestrator | changed: [testbed-node-3]
2025-06-02 17:59:29.312302 | orchestrator | changed: [testbed-node-4]
2025-06-02 17:59:29.312309 | orchestrator |
2025-06-02 17:59:29.312315 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-06-02 17:59:29.312322 | orchestrator | Monday 02 June 2025 17:58:13 +0000 (0:00:29.901) 0:08:25.212 ***********
2025-06-02 17:59:29.312329 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:59:29.312341 | orchestrator |
2025-06-02 17:59:29.312348 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-06-02 17:59:29.312355 | orchestrator | Monday 02 June 2025 17:58:13 +0000 (0:00:00.140) 0:08:25.352 ***********
2025-06-02 17:59:29.312361 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:59:29.312368 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:59:29.312374 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:59:29.312381 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:29.312387 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:59:29.312394 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-06-02 17:59:29.312401 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-06-02 17:59:29.312407 | orchestrator |
2025-06-02 17:59:29.312414 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-06-02 17:59:29.312421 | orchestrator | Monday 02 June 2025 17:58:37 +0000 (0:00:24.199) 0:08:49.552 ***********
2025-06-02 17:59:29.312427 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:59:29.312434 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:29.312440 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:59:29.312447 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:59:29.312453 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:59:29.312460 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:59:29.312466 | orchestrator |
2025-06-02 17:59:29.312473 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-06-02 17:59:29.312480 | orchestrator | Monday 02 June 2025 17:58:49 +0000 (0:00:11.758) 0:09:01.310 ***********
2025-06-02 17:59:29.312486 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:59:29.312493 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:59:29.312499 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:29.312506 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:59:29.312513 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:59:29.312519 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2025-06-02 17:59:29.312526 | orchestrator |
2025-06-02 17:59:29.312532 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-02 17:59:29.312539 | orchestrator | Monday 02 June 2025 17:58:53 +0000 (0:00:04.016) 0:09:05.327 ***********
2025-06-02 17:59:29.312546 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-06-02 17:59:29.312552 | orchestrator |
2025-06-02 17:59:29.312559 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-02 17:59:29.312565 | orchestrator | Monday 02 June 2025 17:59:05 +0000 (0:00:12.120) 0:09:17.447 ***********
2025-06-02 17:59:29.312572 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-06-02 17:59:29.312578 | orchestrator |
2025-06-02 17:59:29.312585 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-06-02 17:59:29.312592 | orchestrator | Monday 02 June 2025 17:59:06 +0000 (0:00:01.308) 0:09:18.755 ***********
2025-06-02 17:59:29.312598 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:59:29.312605 | orchestrator |
2025-06-02 17:59:29.312611 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-06-02 17:59:29.312618 | orchestrator | Monday 02 June 2025 17:59:08 +0000 (0:00:01.493) 0:09:20.248 ***********
2025-06-02 17:59:29.312625 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-06-02 17:59:29.312631 | orchestrator |
2025-06-02 17:59:29.312638 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-06-02 17:59:29.312644 | orchestrator | Monday 02 June 2025 17:59:19 +0000 (0:00:11.110) 0:09:31.358 ***********
2025-06-02 17:59:29.312651 | orchestrator | ok: [testbed-node-3]
2025-06-02 17:59:29.312658 | orchestrator | ok: [testbed-node-4]
2025-06-02 17:59:29.312664 | orchestrator | ok: [testbed-node-5]
2025-06-02 17:59:29.312671 | orchestrator | ok: [testbed-node-0]
2025-06-02 17:59:29.312678 | orchestrator | ok: [testbed-node-1]
2025-06-02 17:59:29.312690 | orchestrator | ok: [testbed-node-2]
2025-06-02 17:59:29.312696 | orchestrator |
2025-06-02 17:59:29.312706 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-06-02 17:59:29.312713 | orchestrator |
2025-06-02 17:59:29.312720 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-06-02 17:59:29.312726 | orchestrator | Monday 02 June 2025 17:59:21 +0000 (0:00:01.871) 0:09:33.229 ***********
2025-06-02 17:59:29.312733 | orchestrator | changed: [testbed-node-0]
2025-06-02 17:59:29.312740 | orchestrator | changed: [testbed-node-1]
2025-06-02 17:59:29.312746 | orchestrator | changed: [testbed-node-2]
2025-06-02 17:59:29.312753 | orchestrator |
2025-06-02 17:59:29.312759 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-06-02 17:59:29.312766 | orchestrator |
2025-06-02 17:59:29.312772 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-06-02 17:59:29.312779 | orchestrator | Monday 02 June 2025 17:59:22 +0000 (0:00:01.133) 0:09:34.363 ***********
2025-06-02 17:59:29.312786 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:29.312792 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:59:29.312799 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:59:29.312805 | orchestrator |
2025-06-02 17:59:29.312812 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-06-02 17:59:29.312819 | orchestrator |
2025-06-02 17:59:29.312825 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-06-02 17:59:29.312832 | orchestrator | Monday 02 June 2025 17:59:22 +0000 (0:00:00.505) 0:09:34.869 ***********
2025-06-02 17:59:29.312842 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-06-02 17:59:29.312849 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-06-02 17:59:29.312856 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-06-02 17:59:29.312862 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-06-02 17:59:29.312869 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-06-02 17:59:29.312875 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-06-02 17:59:29.312882 | orchestrator | skipping: [testbed-node-3]
2025-06-02 17:59:29.312889 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-06-02 17:59:29.312895 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-06-02 17:59:29.312902 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-06-02 17:59:29.312908 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-06-02 17:59:29.312915 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-06-02 17:59:29.312921 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-06-02 17:59:29.312928 | orchestrator | skipping: [testbed-node-4]
2025-06-02 17:59:29.312935 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-06-02 17:59:29.312941 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-06-02 17:59:29.312948 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-06-02 17:59:29.312954 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-06-02 17:59:29.312961 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-06-02 17:59:29.312967 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-06-02 17:59:29.312974 | orchestrator | skipping: [testbed-node-5]
2025-06-02 17:59:29.312981 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-06-02 17:59:29.312987 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-06-02 17:59:29.312994 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-06-02 17:59:29.313000 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-06-02 17:59:29.313007 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-06-02 17:59:29.313013 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-06-02 17:59:29.313025 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:29.313032 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-06-02 17:59:29.313038 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-06-02 17:59:29.313045 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-06-02 17:59:29.313052 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-06-02 17:59:29.313058 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-06-02 17:59:29.313065 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-06-02 17:59:29.313071 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:59:29.313078 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-06-02 17:59:29.313084 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-06-02 17:59:29.313091 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-06-02 17:59:29.313098 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-06-02 17:59:29.313104 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-06-02 17:59:29.313111 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-06-02 17:59:29.313117 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:59:29.313124 | orchestrator |
2025-06-02 17:59:29.313131 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-06-02 17:59:29.313137 | orchestrator |
2025-06-02 17:59:29.313144 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-06-02 17:59:29.313150 | orchestrator | Monday 02 June 2025 17:59:24 +0000 (0:00:01.405) 0:09:36.274 ***********
2025-06-02 17:59:29.313157 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-06-02 17:59:29.313164 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-06-02 17:59:29.313170 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:29.313176 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-06-02 17:59:29.313229 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-06-02 17:59:29.313239 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:59:29.313246 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-06-02 17:59:29.313253 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-06-02 17:59:29.313259 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:59:29.313266 | orchestrator |
2025-06-02 17:59:29.313272 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-06-02 17:59:29.313279 | orchestrator |
2025-06-02 17:59:29.313286 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-06-02 17:59:29.313293 | orchestrator | Monday 02 June 2025 17:59:24 +0000 (0:00:00.761) 0:09:37.036 ***********
2025-06-02 17:59:29.313299 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:29.313306 | orchestrator |
2025-06-02 17:59:29.313312 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-06-02 17:59:29.313323 | orchestrator |
2025-06-02 17:59:29.313335 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-06-02 17:59:29.313345 | orchestrator | Monday 02 June 2025 17:59:25 +0000 (0:00:00.697) 0:09:37.733 ***********
2025-06-02 17:59:29.313356 | orchestrator | skipping: [testbed-node-0]
2025-06-02 17:59:29.313366 | orchestrator | skipping: [testbed-node-1]
2025-06-02 17:59:29.313378 | orchestrator | skipping: [testbed-node-2]
2025-06-02 17:59:29.313389 | orchestrator |
2025-06-02 17:59:29.313401 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 17:59:29.313420 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 17:59:29.313429 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-06-02 17:59:29.313436 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-06-02 17:59:29.313448 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-06-02 17:59:29.313455 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-06-02 17:59:29.313461 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-06-02 17:59:29.313468 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-06-02 17:59:29.313474 | orchestrator |
2025-06-02 17:59:29.313480 | orchestrator |
2025-06-02 17:59:29.313486 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 17:59:29.313492 | orchestrator | Monday 02 June 2025 17:59:26 +0000 (0:00:00.416) 0:09:38.150 ***********
2025-06-02 17:59:29.313498 | orchestrator | ===============================================================================
2025-06-02 17:59:29.313504 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 39.93s
2025-06-02 17:59:29.313510 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.85s
2025-06-02 17:59:29.313517 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 29.90s
2025-06-02 17:59:29.313523 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 24.20s
2025-06-02 17:59:29.313529 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.77s
2025-06-02 17:59:29.313535 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.98s
2025-06-02 17:59:29.313541 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.49s
2025-06-02 17:59:29.313547 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 19.14s
2025-06-02 17:59:29.313553 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 18.43s
2025-06-02 17:59:29.313559 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.11s
2025-06-02 17:59:29.313565 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 13.94s
2025-06-02 17:59:29.313571 | orchestrator | nova : Restart nova-api container -------------------------------------- 12.77s
2025-06-02 17:59:29.313577 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.61s
2025-06-02 17:59:29.313583 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.60s
2025-06-02 17:59:29.313589 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.12s
2025-06-02 17:59:29.313595 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 11.76s
2025-06-02 17:59:29.313601 | orchestrator | nova-cell : Create cell ------------------------------------------------ 11.39s
2025-06-02 17:59:29.313607 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.11s
2025-06-02 17:59:29.313613 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.66s
2025-06-02 17:59:29.313619 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 8.98s
2025-06-02 17:59:32.346315 | orchestrator | 2025-06-02 17:59:32 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED
2025-06-02 17:59:32.348623 | orchestrator | 2025-06-02 17:59:32 | INFO  | Task 8167152c-b66b-4ee2-b39a-3ffc65524503 is in state STARTED
2025-06-02 17:59:32.348665 | orchestrator | 2025-06-02 17:59:32 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:35.398854 | orchestrator | 2025-06-02 17:59:35 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED
2025-06-02 17:59:35.402293 | orchestrator | 2025-06-02 17:59:35 | INFO  | Task 8167152c-b66b-4ee2-b39a-3ffc65524503 is in state STARTED
2025-06-02 17:59:35.402401 | orchestrator | 2025-06-02 17:59:35 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:38.455427 | orchestrator | 2025-06-02 17:59:38 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED
2025-06-02 17:59:38.459707 | orchestrator | 2025-06-02 17:59:38 | INFO  | Task 8167152c-b66b-4ee2-b39a-3ffc65524503 is in state STARTED
2025-06-02 17:59:38.459734 | orchestrator | 2025-06-02 17:59:38 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:41.506587 | orchestrator | 2025-06-02 17:59:41 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED
2025-06-02 17:59:41.508643 | orchestrator | 2025-06-02 17:59:41 | INFO  | Task 8167152c-b66b-4ee2-b39a-3ffc65524503 is in state STARTED
2025-06-02 17:59:41.508710 | orchestrator | 2025-06-02 17:59:41 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:44.561721 | orchestrator | 2025-06-02 17:59:44 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED
2025-06-02 17:59:44.563019 | orchestrator | 2025-06-02 17:59:44 | INFO  | Task 8167152c-b66b-4ee2-b39a-3ffc65524503 is in state SUCCESS
2025-06-02 17:59:44.563063 | orchestrator | 2025-06-02 17:59:44 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:47.604201 | orchestrator | 2025-06-02 17:59:47 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED
2025-06-02 17:59:47.604291 | orchestrator | 2025-06-02 17:59:47 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:50.650511 | orchestrator | 2025-06-02 17:59:50 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED
2025-06-02 17:59:50.650599 | orchestrator | 2025-06-02 17:59:50 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:53.696060 | orchestrator | 2025-06-02 17:59:53 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED
2025-06-02 17:59:53.696152 | orchestrator | 2025-06-02 17:59:53 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:56.748340 | orchestrator | 2025-06-02 17:59:56 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED
2025-06-02 17:59:56.748439 | orchestrator | 2025-06-02 17:59:56 | INFO  | Wait 1 second(s) until the next check
2025-06-02 17:59:59.790107 | orchestrator | 2025-06-02 17:59:59 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED
2025-06-02 17:59:59.790222 | orchestrator | 2025-06-02 17:59:59 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:02.837605 | orchestrator | 2025-06-02 18:00:02 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:02.837683 | orchestrator | 2025-06-02 18:00:02 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:05.876465 | orchestrator | 2025-06-02 18:00:05 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:05.876551 | orchestrator | 2025-06-02 18:00:05 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:08.934403 | orchestrator | 2025-06-02 18:00:08 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:08.934493 | orchestrator | 2025-06-02 18:00:08 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:11.984109 | orchestrator | 2025-06-02 18:00:11 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:11.984299 | orchestrator | 2025-06-02 18:00:11 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:15.033669 | orchestrator | 2025-06-02 18:00:15 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:15.033836 | orchestrator | 2025-06-02 18:00:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:18.070848 | orchestrator | 2025-06-02 18:00:18 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:18.070922 | orchestrator | 2025-06-02 18:00:18 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:21.114890 | orchestrator | 2025-06-02 18:00:21 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:21.115014 | orchestrator | 2025-06-02 18:00:21 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:24.165633 | orchestrator | 2025-06-02 18:00:24 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 
18:00:24.165726 | orchestrator | 2025-06-02 18:00:24 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:27.214634 | orchestrator | 2025-06-02 18:00:27 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:27.214738 | orchestrator | 2025-06-02 18:00:27 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:30.257768 | orchestrator | 2025-06-02 18:00:30 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:30.257838 | orchestrator | 2025-06-02 18:00:30 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:33.302688 | orchestrator | 2025-06-02 18:00:33 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:33.302813 | orchestrator | 2025-06-02 18:00:33 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:36.353941 | orchestrator | 2025-06-02 18:00:36 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:36.354320 | orchestrator | 2025-06-02 18:00:36 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:39.399729 | orchestrator | 2025-06-02 18:00:39 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:39.399852 | orchestrator | 2025-06-02 18:00:39 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:42.446503 | orchestrator | 2025-06-02 18:00:42 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:42.446606 | orchestrator | 2025-06-02 18:00:42 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:45.502191 | orchestrator | 2025-06-02 18:00:45 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:45.502267 | orchestrator | 2025-06-02 18:00:45 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:48.543403 | orchestrator | 2025-06-02 18:00:48 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:48.543503 
| orchestrator | 2025-06-02 18:00:48 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:51.581383 | orchestrator | 2025-06-02 18:00:51 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:51.581483 | orchestrator | 2025-06-02 18:00:51 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:54.621812 | orchestrator | 2025-06-02 18:00:54 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:54.621916 | orchestrator | 2025-06-02 18:00:54 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:00:57.671662 | orchestrator | 2025-06-02 18:00:57 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:00:57.671841 | orchestrator | 2025-06-02 18:00:57 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:00.718749 | orchestrator | 2025-06-02 18:01:00 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:00.718852 | orchestrator | 2025-06-02 18:01:00 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:03.761497 | orchestrator | 2025-06-02 18:01:03 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:03.761613 | orchestrator | 2025-06-02 18:01:03 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:06.808347 | orchestrator | 2025-06-02 18:01:06 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:06.808451 | orchestrator | 2025-06-02 18:01:06 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:09.862345 | orchestrator | 2025-06-02 18:01:09 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:09.862459 | orchestrator | 2025-06-02 18:01:09 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:12.915177 | orchestrator | 2025-06-02 18:01:12 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:12.915267 | orchestrator 
| 2025-06-02 18:01:12 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:15.960362 | orchestrator | 2025-06-02 18:01:15 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:15.960469 | orchestrator | 2025-06-02 18:01:15 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:19.006540 | orchestrator | 2025-06-02 18:01:19 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:19.006636 | orchestrator | 2025-06-02 18:01:19 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:22.051981 | orchestrator | 2025-06-02 18:01:22 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:22.052058 | orchestrator | 2025-06-02 18:01:22 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:25.092432 | orchestrator | 2025-06-02 18:01:25 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:25.092563 | orchestrator | 2025-06-02 18:01:25 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:28.135797 | orchestrator | 2025-06-02 18:01:28 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:28.135904 | orchestrator | 2025-06-02 18:01:28 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:31.178320 | orchestrator | 2025-06-02 18:01:31 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:31.178407 | orchestrator | 2025-06-02 18:01:31 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:34.223799 | orchestrator | 2025-06-02 18:01:34 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:34.224206 | orchestrator | 2025-06-02 18:01:34 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:37.285692 | orchestrator | 2025-06-02 18:01:37 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:37.285791 | orchestrator | 2025-06-02 
18:01:37 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:40.335565 | orchestrator | 2025-06-02 18:01:40 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:40.335646 | orchestrator | 2025-06-02 18:01:40 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:43.389284 | orchestrator | 2025-06-02 18:01:43 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:43.389360 | orchestrator | 2025-06-02 18:01:43 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:46.441831 | orchestrator | 2025-06-02 18:01:46 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:46.441970 | orchestrator | 2025-06-02 18:01:46 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:49.485280 | orchestrator | 2025-06-02 18:01:49 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:49.485384 | orchestrator | 2025-06-02 18:01:49 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:52.523520 | orchestrator | 2025-06-02 18:01:52 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:52.523628 | orchestrator | 2025-06-02 18:01:52 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:55.567876 | orchestrator | 2025-06-02 18:01:55 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:55.567980 | orchestrator | 2025-06-02 18:01:55 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:01:58.613705 | orchestrator | 2025-06-02 18:01:58 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:01:58.613820 | orchestrator | 2025-06-02 18:01:58 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:02:01.658416 | orchestrator | 2025-06-02 18:02:01 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:02:01.658496 | orchestrator | 2025-06-02 18:02:01 | INFO 
 | Wait 1 second(s) until the next check 2025-06-02 18:02:04.702099 | orchestrator | 2025-06-02 18:02:04 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:02:04.702212 | orchestrator | 2025-06-02 18:02:04 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:02:07.739909 | orchestrator | 2025-06-02 18:02:07 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:02:07.740001 | orchestrator | 2025-06-02 18:02:07 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:02:10.791224 | orchestrator | 2025-06-02 18:02:10 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state STARTED 2025-06-02 18:02:10.791560 | orchestrator | 2025-06-02 18:02:10 | INFO  | Wait 1 second(s) until the next check 2025-06-02 18:02:13.835658 | orchestrator | 2025-06-02 18:02:13 | INFO  | Task f7bdda07-5afe-48b0-8a08-87b12d3f3e1c is in state SUCCESS 2025-06-02 18:02:13.836898 | orchestrator | 2025-06-02 18:02:13.836945 | orchestrator | None 2025-06-02 18:02:13.836957 | orchestrator | 2025-06-02 18:02:13.836967 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 18:02:13.836978 | orchestrator | 2025-06-02 18:02:13.836988 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 18:02:13.836998 | orchestrator | Monday 02 June 2025 17:57:18 +0000 (0:00:00.277) 0:00:00.277 *********** 2025-06-02 18:02:13.837008 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:02:13.837019 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:02:13.837029 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:02:13.837072 | orchestrator | 2025-06-02 18:02:13.837090 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 18:02:13.837101 | orchestrator | Monday 02 June 2025 17:57:18 +0000 (0:00:00.361) 0:00:00.639 *********** 2025-06-02 18:02:13.837111 | orchestrator | ok: 
[testbed-node-0] => (item=enable_octavia_True) 2025-06-02 18:02:13.837122 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-06-02 18:02:13.837132 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-06-02 18:02:13.837141 | orchestrator | 2025-06-02 18:02:13.837151 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-06-02 18:02:13.837161 | orchestrator | 2025-06-02 18:02:13.837171 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 18:02:13.837180 | orchestrator | Monday 02 June 2025 17:57:19 +0000 (0:00:00.429) 0:00:01.069 *********** 2025-06-02 18:02:13.837213 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 18:02:13.837225 | orchestrator | 2025-06-02 18:02:13.837234 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-06-02 18:02:13.837244 | orchestrator | Monday 02 June 2025 17:57:19 +0000 (0:00:00.621) 0:00:01.691 *********** 2025-06-02 18:02:13.837254 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-06-02 18:02:13.837263 | orchestrator | 2025-06-02 18:02:13.837273 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-06-02 18:02:13.837283 | orchestrator | Monday 02 June 2025 17:57:23 +0000 (0:00:03.529) 0:00:05.221 *********** 2025-06-02 18:02:13.837292 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-06-02 18:02:13.837306 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-06-02 18:02:13.837317 | orchestrator | 2025-06-02 18:02:13.837326 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-06-02 18:02:13.837336 | 
orchestrator | Monday 02 June 2025 17:57:30 +0000 (0:00:06.859) 0:00:12.080 *********** 2025-06-02 18:02:13.837346 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-02 18:02:13.837356 | orchestrator | 2025-06-02 18:02:13.837365 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-06-02 18:02:13.837375 | orchestrator | Monday 02 June 2025 17:57:33 +0000 (0:00:03.432) 0:00:15.513 *********** 2025-06-02 18:02:13.837384 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-02 18:02:13.837394 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-02 18:02:13.837404 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-02 18:02:13.837413 | orchestrator | 2025-06-02 18:02:13.837430 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-06-02 18:02:13.837445 | orchestrator | Monday 02 June 2025 17:57:42 +0000 (0:00:08.526) 0:00:24.040 *********** 2025-06-02 18:02:13.837460 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-02 18:02:13.837476 | orchestrator | 2025-06-02 18:02:13.837492 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-06-02 18:02:13.837510 | orchestrator | Monday 02 June 2025 17:57:46 +0000 (0:00:03.832) 0:00:27.872 *********** 2025-06-02 18:02:13.837528 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-02 18:02:13.837544 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-02 18:02:13.837561 | orchestrator | 2025-06-02 18:02:13.837578 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-06-02 18:02:13.837594 | orchestrator | Monday 02 June 2025 17:57:53 +0000 (0:00:07.829) 0:00:35.702 *********** 2025-06-02 18:02:13.837610 | orchestrator | changed: [testbed-node-0] => 
(item=load-balancer_observer) 2025-06-02 18:02:13.837629 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-06-02 18:02:13.837648 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-06-02 18:02:13.837664 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-06-02 18:02:13.837678 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-06-02 18:02:13.837688 | orchestrator | 2025-06-02 18:02:13.837697 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 18:02:13.837707 | orchestrator | Monday 02 June 2025 17:58:09 +0000 (0:00:15.736) 0:00:51.438 *********** 2025-06-02 18:02:13.837723 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 18:02:13.837741 | orchestrator | 2025-06-02 18:02:13.837757 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-06-02 18:02:13.837773 | orchestrator | Monday 02 June 2025 17:58:10 +0000 (0:00:00.578) 0:00:52.017 *********** 2025-06-02 18:02:13.837788 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.837817 | orchestrator | 2025-06-02 18:02:13.837833 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-06-02 18:02:13.837850 | orchestrator | Monday 02 June 2025 17:58:15 +0000 (0:00:05.404) 0:00:57.422 *********** 2025-06-02 18:02:13.838245 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.838262 | orchestrator | 2025-06-02 18:02:13.838272 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-06-02 18:02:13.838298 | orchestrator | Monday 02 June 2025 17:58:20 +0000 (0:00:04.704) 0:01:02.126 *********** 2025-06-02 18:02:13.838309 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:02:13.838319 | orchestrator | 2025-06-02 
18:02:13.838328 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-06-02 18:02:13.838338 | orchestrator | Monday 02 June 2025 17:58:23 +0000 (0:00:03.363) 0:01:05.490 *********** 2025-06-02 18:02:13.838348 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-06-02 18:02:13.838357 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-06-02 18:02:13.838367 | orchestrator | 2025-06-02 18:02:13.838377 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-06-02 18:02:13.838386 | orchestrator | Monday 02 June 2025 17:58:34 +0000 (0:00:10.604) 0:01:16.095 *********** 2025-06-02 18:02:13.838396 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-06-02 18:02:13.838406 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-06-02 18:02:13.838420 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-06-02 18:02:13.838439 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-06-02 18:02:13.838455 | orchestrator | 2025-06-02 18:02:13.838471 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-06-02 18:02:13.838486 | orchestrator | Monday 02 June 2025 17:58:52 +0000 (0:00:18.418) 0:01:34.513 *********** 2025-06-02 18:02:13.838503 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.838519 | orchestrator | 2025-06-02 18:02:13.838535 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-06-02 18:02:13.838550 | orchestrator | 
Monday 02 June 2025 17:58:57 +0000 (0:00:04.617) 0:01:39.131 *********** 2025-06-02 18:02:13.838560 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.838570 | orchestrator | 2025-06-02 18:02:13.838579 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-06-02 18:02:13.838589 | orchestrator | Monday 02 June 2025 17:59:02 +0000 (0:00:05.455) 0:01:44.587 *********** 2025-06-02 18:02:13.838601 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:02:13.838617 | orchestrator | 2025-06-02 18:02:13.838632 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-06-02 18:02:13.838648 | orchestrator | Monday 02 June 2025 17:59:03 +0000 (0:00:00.264) 0:01:44.851 *********** 2025-06-02 18:02:13.838664 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.838681 | orchestrator | 2025-06-02 18:02:13.838699 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 18:02:13.838716 | orchestrator | Monday 02 June 2025 17:59:07 +0000 (0:00:04.589) 0:01:49.441 *********** 2025-06-02 18:02:13.838731 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 18:02:13.838747 | orchestrator | 2025-06-02 18:02:13.838757 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-06-02 18:02:13.838766 | orchestrator | Monday 02 June 2025 17:59:08 +0000 (0:00:01.338) 0:01:50.780 *********** 2025-06-02 18:02:13.838776 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:02:13.838785 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:02:13.838795 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.838816 | orchestrator | 2025-06-02 18:02:13.838826 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-06-02 18:02:13.838836 | orchestrator 
| Monday 02 June 2025 17:59:14 +0000 (0:00:05.606) 0:01:56.386 *********** 2025-06-02 18:02:13.838845 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:02:13.838896 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:02:13.838913 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.838924 | orchestrator | 2025-06-02 18:02:13.838933 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-06-02 18:02:13.838943 | orchestrator | Monday 02 June 2025 17:59:19 +0000 (0:00:05.052) 0:02:01.438 *********** 2025-06-02 18:02:13.838953 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.838962 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:02:13.838972 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:02:13.838981 | orchestrator | 2025-06-02 18:02:13.838991 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-06-02 18:02:13.839001 | orchestrator | Monday 02 June 2025 17:59:20 +0000 (0:00:00.841) 0:02:02.280 *********** 2025-06-02 18:02:13.839010 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:02:13.839019 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:02:13.839029 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:02:13.839060 | orchestrator | 2025-06-02 18:02:13.839071 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-06-02 18:02:13.839194 | orchestrator | Monday 02 June 2025 17:59:22 +0000 (0:00:02.039) 0:02:04.320 *********** 2025-06-02 18:02:13.839211 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:02:13.839227 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:02:13.839242 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.839257 | orchestrator | 2025-06-02 18:02:13.839273 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-06-02 18:02:13.839287 | orchestrator | Monday 02 June 2025 17:59:23 
+0000 (0:00:01.307) 0:02:05.628 *********** 2025-06-02 18:02:13.839301 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.839315 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:02:13.839329 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:02:13.839343 | orchestrator | 2025-06-02 18:02:13.839357 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-06-02 18:02:13.839372 | orchestrator | Monday 02 June 2025 17:59:25 +0000 (0:00:01.191) 0:02:06.820 *********** 2025-06-02 18:02:13.839387 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:02:13.839403 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:02:13.839419 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.839434 | orchestrator | 2025-06-02 18:02:13.839463 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-06-02 18:02:13.839881 | orchestrator | Monday 02 June 2025 17:59:27 +0000 (0:00:01.999) 0:02:08.819 *********** 2025-06-02 18:02:13.839910 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.839927 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:02:13.839942 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:02:13.840000 | orchestrator | 2025-06-02 18:02:13.840010 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-06-02 18:02:13.840020 | orchestrator | Monday 02 June 2025 17:59:28 +0000 (0:00:01.820) 0:02:10.639 *********** 2025-06-02 18:02:13.840030 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:02:13.840065 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:02:13.840075 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:02:13.840084 | orchestrator | 2025-06-02 18:02:13.840094 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-06-02 18:02:13.840104 | orchestrator | Monday 02 June 2025 17:59:29 +0000 (0:00:00.605) 
0:02:11.245 *********** 2025-06-02 18:02:13.840119 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:02:13.840135 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:02:13.840150 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:02:13.840165 | orchestrator | 2025-06-02 18:02:13.840181 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 18:02:13.840215 | orchestrator | Monday 02 June 2025 17:59:32 +0000 (0:00:02.915) 0:02:14.160 *********** 2025-06-02 18:02:13.840256 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 18:02:13.840273 | orchestrator | 2025-06-02 18:02:13.840290 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-06-02 18:02:13.840303 | orchestrator | Monday 02 June 2025 17:59:33 +0000 (0:00:00.729) 0:02:14.890 *********** 2025-06-02 18:02:13.840313 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:02:13.840322 | orchestrator | 2025-06-02 18:02:13.840332 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-06-02 18:02:13.840342 | orchestrator | Monday 02 June 2025 17:59:36 +0000 (0:00:03.779) 0:02:18.670 *********** 2025-06-02 18:02:13.840352 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:02:13.840361 | orchestrator | 2025-06-02 18:02:13.840371 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-06-02 18:02:13.840380 | orchestrator | Monday 02 June 2025 17:59:39 +0000 (0:00:03.083) 0:02:21.754 *********** 2025-06-02 18:02:13.840390 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-06-02 18:02:13.840400 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-06-02 18:02:13.840409 | orchestrator | 2025-06-02 18:02:13.840422 | orchestrator | TASK [octavia : Get loadbalancer management network] 
*************************** 2025-06-02 18:02:13.840439 | orchestrator | Monday 02 June 2025 17:59:47 +0000 (0:00:07.056) 0:02:28.810 *********** 2025-06-02 18:02:13.840455 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:02:13.840470 | orchestrator | 2025-06-02 18:02:13.840488 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-06-02 18:02:13.840506 | orchestrator | Monday 02 June 2025 17:59:50 +0000 (0:00:03.351) 0:02:32.161 *********** 2025-06-02 18:02:13.840523 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:02:13.840539 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:02:13.840551 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:02:13.840563 | orchestrator | 2025-06-02 18:02:13.840573 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-06-02 18:02:13.840584 | orchestrator | Monday 02 June 2025 17:59:50 +0000 (0:00:00.354) 0:02:32.516 *********** 2025-06-02 18:02:13.840600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:02:13.840663 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:02:13.840687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:02:13.840701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:02:13.840712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:02:13.840723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:02:13.840734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.840746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.840781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.840799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.840811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.840821 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.840831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2025-06-02 18:02:13.840841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:02:13.840852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:02:13.840867 | orchestrator | 2025-06-02 18:02:13.840877 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2025-06-02 18:02:13.840887 | orchestrator | Monday 02 June 2025 17:59:53 +0000 (0:00:02.768) 0:02:35.284 *********** 2025-06-02 18:02:13.840897 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:02:13.840907 | orchestrator | 2025-06-02 18:02:13.840938 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-06-02 18:02:13.840949 | orchestrator | Monday 02 June 2025 17:59:53 +0000 (0:00:00.357) 0:02:35.641 *********** 2025-06-02 18:02:13.840958 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:02:13.840968 | orchestrator | skipping: 
[testbed-node-1] 2025-06-02 18:02:13.840978 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:02:13.840987 | orchestrator | 2025-06-02 18:02:13.840997 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-06-02 18:02:13.841007 | orchestrator | Monday 02 June 2025 17:59:54 +0000 (0:00:00.316) 0:02:35.958 *********** 2025-06-02 18:02:13.841017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:02:13.841028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:02:13.841082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.841102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.841121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:02:13.841148 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:02:13.841189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 
'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:02:13.841201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:02:13.841211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.841222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.841232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:02:13.841242 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:02:13.841265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': 
'30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:02:13.841319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:02:13.841335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.841348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.841362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:02:13.841376 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:02:13.841390 | orchestrator | 2025-06-02 18:02:13.841403 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 18:02:13.841416 | orchestrator | Monday 02 June 2025 17:59:54 +0000 (0:00:00.690) 0:02:36.648 *********** 2025-06-02 18:02:13.841431 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 18:02:13.841447 | orchestrator | 2025-06-02 18:02:13.841462 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-06-02 18:02:13.841477 | orchestrator | Monday 02 June 2025 17:59:55 +0000 (0:00:00.567) 0:02:37.216 *********** 2025-06-02 18:02:13.841502 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:02:13.841557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:02:13 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:13.841593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:02:13.841609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:02:13.841625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:02:13.841652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:02:13.841668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.841718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.841736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.841754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.841771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.841787 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.841812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:02:13.841826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:02:13.841851 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2025-06-02 18:02:13.841866 | orchestrator | 2025-06-02 18:02:13.841880 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2025-06-02 18:02:13.841893 | orchestrator | Monday 02 June 2025 18:00:00 +0000 (0:00:05.351) 0:02:42.568 *********** 2025-06-02 18:02:13.841907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:02:13.841924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:02:13.841940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.841963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.841981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:02:13.841996 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:02:13.842093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:02:13.842116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:02:13.842133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.842147 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.842175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:02:13.842191 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:02:13.842205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:02:13.842229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:02:13.842245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.842261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.842277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:02:13.842309 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:02:13.842326 | orchestrator | 2025-06-02 18:02:13.842341 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2025-06-02 18:02:13.842357 | orchestrator | Monday 02 June 2025 18:00:01 +0000 (0:00:00.708) 0:02:43.276 *********** 2025-06-02 18:02:13.842373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:02:13.842390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:02:13.842415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.842432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.842449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:02:13.842474 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:02:13.842489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:02:13.842504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:02:13.842520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.842536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.842562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 
5672'], 'timeout': '30'}}})  2025-06-02 18:02:13.842578 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:02:13.842593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-02 18:02:13.842618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-02 18:02:13.842634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.842650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-02 18:02:13.842666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-02 18:02:13.842681 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:02:13.842697 | orchestrator | 2025-06-02 18:02:13.842714 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2025-06-02 18:02:13.842729 | orchestrator | Monday 02 June 2025 18:00:02 +0000 (0:00:00.905) 0:02:44.181 *********** 2025-06-02 18:02:13.842757 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:02:13.842785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:02:13.842801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 
'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:02:13.842818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:02:13.842835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:02:13.842860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': 
{'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:02:13.842877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.842906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.842925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.842943 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.842961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.842979 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.843007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:02:13.843024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:02:13.843126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 18:02:13.843146 | orchestrator |
2025-06-02 18:02:13.843163 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2025-06-02 18:02:13.843180 | orchestrator | Monday 02 June 2025 18:00:07 +0000 (0:00:04.996) 0:02:49.178 ***********
2025-06-02 18:02:13.843196 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-06-02 18:02:13.843212 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-06-02 18:02:13.843230 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-06-02 18:02:13.843246 | orchestrator |
2025-06-02 18:02:13.843262 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2025-06-02 18:02:13.843280 | orchestrator | Monday 02 June 2025 18:00:09 +0000 (0:00:01.928) 0:02:51.106 ***********
2025-06-02 18:02:13.843297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 18:02:13.843315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 18:02:13.843345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-06-02 18:02:13.843372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 18:02:13.843391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 18:02:13.843408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-06-02 18:02:13.843424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 18:02:13.843441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 18:02:13.843468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-06-02 18:02:13.843494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 18:02:13.843513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 18:02:13.843529 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-06-02 18:02:13.843544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 18:02:13.843560 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 18:02:13.843576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-06-02 18:02:13.843591 | orchestrator |
2025-06-02 18:02:13.843605 | orchestrator | TASK [octavia : Copying over Octavia SSH key] **********************************
2025-06-02 18:02:13.843628 | orchestrator | Monday 02 June 2025 18:00:26 +0000 (0:00:16.838) 0:03:07.945 ***********
2025-06-02 18:02:13.843640 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:02:13.843653 | orchestrator | changed: [testbed-node-1]
2025-06-02 18:02:13.843666 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:02:13.843678 | orchestrator | 2025-06-02 18:02:13.843690 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-06-02 18:02:13.843703 | orchestrator | Monday 02 June 2025 18:00:27 +0000 (0:00:01.529) 0:03:09.474 *********** 2025-06-02 18:02:13.843723 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-02 18:02:13.843738 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-02 18:02:13.843751 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-02 18:02:13.843764 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-02 18:02:13.843777 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-02 18:02:13.843789 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-02 18:02:13.843801 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-02 18:02:13.843812 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-02 18:02:13.843824 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-02 18:02:13.843835 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-02 18:02:13.843846 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-02 18:02:13.843861 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-02 18:02:13.843872 | orchestrator | 2025-06-02 18:02:13.843883 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-06-02 18:02:13.843895 | orchestrator | Monday 02 June 2025 18:00:33 +0000 (0:00:05.532) 0:03:15.007 *********** 2025-06-02 18:02:13.843907 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-02 18:02:13.843917 | orchestrator | changed: 
[testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-02 18:02:13.843928 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-02 18:02:13.843939 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-02 18:02:13.843950 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-02 18:02:13.843961 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-02 18:02:13.843974 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-06-02 18:02:13.843985 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-02 18:02:13.843997 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-02 18:02:13.844008 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-02 18:02:13.844019 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-02 18:02:13.844031 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-02 18:02:13.844070 | orchestrator | 2025-06-02 18:02:13.844082 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-06-02 18:02:13.844093 | orchestrator | Monday 02 June 2025 18:00:38 +0000 (0:00:05.227) 0:03:20.234 *********** 2025-06-02 18:02:13.844105 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-06-02 18:02:13.844116 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-06-02 18:02:13.844128 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-06-02 18:02:13.844140 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-06-02 18:02:13.844152 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-06-02 18:02:13.844165 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-06-02 18:02:13.844177 | orchestrator | changed: 
[testbed-node-0] => (item=server_ca.cert.pem) 2025-06-02 18:02:13.844190 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-06-02 18:02:13.844216 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-06-02 18:02:13.844230 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-06-02 18:02:13.844242 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-06-02 18:02:13.844254 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-06-02 18:02:13.844266 | orchestrator | 2025-06-02 18:02:13.844280 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-06-02 18:02:13.844292 | orchestrator | Monday 02 June 2025 18:00:43 +0000 (0:00:05.152) 0:03:25.386 *********** 2025-06-02 18:02:13.844307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:02:13.844335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:02:13.844350 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-02 18:02:13.844363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:02:13.844383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:02:13.844396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-06-02 18:02:13.844409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.844432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.844445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.844458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 
2025-06-02 18:02:13.844470 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.844490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-06-02 18:02:13.844504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:02:13.844517 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:02:13.844538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-06-02 18:02:13.844552 | orchestrator | 2025-06-02 18:02:13.844565 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-02 18:02:13.844578 | orchestrator | Monday 02 June 2025 18:00:47 +0000 (0:00:03.595) 0:03:28.981 *********** 2025-06-02 18:02:13.844591 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:02:13.844605 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:02:13.844613 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:02:13.844621 | orchestrator | 2025-06-02 18:02:13.844629 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-06-02 18:02:13.844637 | orchestrator | Monday 02 June 2025 18:00:47 +0000 (0:00:00.332) 0:03:29.314 *********** 2025-06-02 18:02:13.844644 | orchestrator | changed: [testbed-node-0] 2025-06-02 
18:02:13.844652 | orchestrator | 2025-06-02 18:02:13.844660 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-06-02 18:02:13.844668 | orchestrator | Monday 02 June 2025 18:00:50 +0000 (0:00:02.568) 0:03:31.883 *********** 2025-06-02 18:02:13.844675 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.844683 | orchestrator | 2025-06-02 18:02:13.844691 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-06-02 18:02:13.844699 | orchestrator | Monday 02 June 2025 18:00:52 +0000 (0:00:02.135) 0:03:34.018 *********** 2025-06-02 18:02:13.844706 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.844714 | orchestrator | 2025-06-02 18:02:13.844728 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-06-02 18:02:13.844736 | orchestrator | Monday 02 June 2025 18:00:54 +0000 (0:00:02.195) 0:03:36.214 *********** 2025-06-02 18:02:13.844744 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.844752 | orchestrator | 2025-06-02 18:02:13.844759 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-06-02 18:02:13.844767 | orchestrator | Monday 02 June 2025 18:00:56 +0000 (0:00:02.213) 0:03:38.428 *********** 2025-06-02 18:02:13.844790 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.844798 | orchestrator | 2025-06-02 18:02:13.844814 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 18:02:13.844822 | orchestrator | Monday 02 June 2025 18:01:17 +0000 (0:00:21.022) 0:03:59.450 *********** 2025-06-02 18:02:13.844829 | orchestrator | 2025-06-02 18:02:13.844837 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 18:02:13.844845 | orchestrator | Monday 02 June 2025 18:01:17 +0000 (0:00:00.077) 0:03:59.528 *********** 
2025-06-02 18:02:13.844853 | orchestrator | 2025-06-02 18:02:13.844861 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-06-02 18:02:13.844869 | orchestrator | Monday 02 June 2025 18:01:17 +0000 (0:00:00.085) 0:03:59.613 *********** 2025-06-02 18:02:13.844877 | orchestrator | 2025-06-02 18:02:13.844884 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-06-02 18:02:13.844892 | orchestrator | Monday 02 June 2025 18:01:17 +0000 (0:00:00.084) 0:03:59.698 *********** 2025-06-02 18:02:13.844900 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.844907 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:02:13.844915 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:02:13.844923 | orchestrator | 2025-06-02 18:02:13.844930 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-06-02 18:02:13.844938 | orchestrator | Monday 02 June 2025 18:01:33 +0000 (0:00:15.931) 0:04:15.629 *********** 2025-06-02 18:02:13.844946 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.844954 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:02:13.844961 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:02:13.844969 | orchestrator | 2025-06-02 18:02:13.844977 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-06-02 18:02:13.844985 | orchestrator | Monday 02 June 2025 18:01:45 +0000 (0:00:11.877) 0:04:27.506 *********** 2025-06-02 18:02:13.844992 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:02:13.845000 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:02:13.845008 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.845016 | orchestrator | 2025-06-02 18:02:13.845023 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-06-02 18:02:13.845031 | orchestrator | Monday 02 
June 2025 18:01:54 +0000 (0:00:08.534) 0:04:36.041 *********** 2025-06-02 18:02:13.845197 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:02:13.845212 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.845220 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:02:13.845228 | orchestrator | 2025-06-02 18:02:13.845236 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-06-02 18:02:13.845244 | orchestrator | Monday 02 June 2025 18:02:04 +0000 (0:00:10.611) 0:04:46.653 *********** 2025-06-02 18:02:13.845252 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:02:13.845260 | orchestrator | changed: [testbed-node-1] 2025-06-02 18:02:13.845268 | orchestrator | changed: [testbed-node-2] 2025-06-02 18:02:13.845276 | orchestrator | 2025-06-02 18:02:13.845284 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 18:02:13.845292 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-02 18:02:13.845302 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 18:02:13.845319 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-02 18:02:13.845327 | orchestrator | 2025-06-02 18:02:13.845335 | orchestrator | 2025-06-02 18:02:13.845343 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 18:02:13.845361 | orchestrator | Monday 02 June 2025 18:02:10 +0000 (0:00:05.794) 0:04:52.447 *********** 2025-06-02 18:02:13.845369 | orchestrator | =============================================================================== 2025-06-02 18:02:13.845377 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.02s 2025-06-02 18:02:13.845385 | orchestrator | octavia : Add rules for security 
groups -------------------------------- 18.42s 2025-06-02 18:02:13.845393 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.84s 2025-06-02 18:02:13.845401 | orchestrator | octavia : Restart octavia-api container -------------------------------- 15.93s 2025-06-02 18:02:13.845409 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.74s 2025-06-02 18:02:13.845429 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.88s 2025-06-02 18:02:13.845437 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.61s 2025-06-02 18:02:13.845454 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.60s 2025-06-02 18:02:13.845462 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.53s 2025-06-02 18:02:13.845470 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.53s 2025-06-02 18:02:13.845477 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.83s 2025-06-02 18:02:13.845485 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.06s 2025-06-02 18:02:13.845493 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.86s 2025-06-02 18:02:13.845500 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 5.79s 2025-06-02 18:02:13.845508 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.61s 2025-06-02 18:02:13.845516 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.53s 2025-06-02 18:02:13.845524 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.46s 2025-06-02 18:02:13.845531 | orchestrator | octavia : Create amphora flavor 
----------------------------------------- 5.40s 2025-06-02 18:02:13.845539 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.35s 2025-06-02 18:02:13.845547 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.23s 2025-06-02 18:02:16.882865 | orchestrator | 2025-06-02 18:02:16 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:19.925558 | orchestrator | 2025-06-02 18:02:19 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:22.971620 | orchestrator | 2025-06-02 18:02:22 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:26.016934 | orchestrator | 2025-06-02 18:02:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:29.049923 | orchestrator | 2025-06-02 18:02:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:32.090617 | orchestrator | 2025-06-02 18:02:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:35.131220 | orchestrator | 2025-06-02 18:02:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:38.168483 | orchestrator | 2025-06-02 18:02:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:41.208827 | orchestrator | 2025-06-02 18:02:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:44.256208 | orchestrator | 2025-06-02 18:02:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:47.292756 | orchestrator | 2025-06-02 18:02:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:50.336878 | orchestrator | 2025-06-02 18:02:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:53.377958 | orchestrator | 2025-06-02 18:02:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:02:56.426692 | orchestrator | 2025-06-02 18:02:56 | INFO  | Wait 1 second(s) until refresh of running 
tasks 2025-06-02 18:02:59.467111 | orchestrator | 2025-06-02 18:02:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:03:02.518294 | orchestrator | 2025-06-02 18:03:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:03:05.556714 | orchestrator | 2025-06-02 18:03:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:03:08.595799 | orchestrator | 2025-06-02 18:03:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:03:11.641677 | orchestrator | 2025-06-02 18:03:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-02 18:03:14.686649 | orchestrator | 2025-06-02 18:03:14.963752 | orchestrator | 2025-06-02 18:03:14.967226 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Mon Jun 2 18:03:14 UTC 2025 2025-06-02 18:03:14.967271 | orchestrator | 2025-06-02 18:03:15.466626 | orchestrator | ok: Runtime: 0:35:42.835135 2025-06-02 18:03:15.751006 | 2025-06-02 18:03:15.751181 | TASK [Bootstrap services] 2025-06-02 18:03:16.582636 | orchestrator | 2025-06-02 18:03:16.582832 | orchestrator | # BOOTSTRAP 2025-06-02 18:03:16.582859 | orchestrator | 2025-06-02 18:03:16.582874 | orchestrator | + set -e 2025-06-02 18:03:16.582887 | orchestrator | + echo 2025-06-02 18:03:16.582901 | orchestrator | + echo '# BOOTSTRAP' 2025-06-02 18:03:16.582919 | orchestrator | + echo 2025-06-02 18:03:16.582965 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-06-02 18:03:16.592637 | orchestrator | + set -e 2025-06-02 18:03:16.592729 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-06-02 18:03:20.753171 | orchestrator | 2025-06-02 18:03:20 | INFO  | It takes a moment until task c25d1ecd-d352-4cef-ac50-87808be80079 (flavor-manager) has been started and output is visible here. 
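The PLAY RECAP above reports `failed=0` and `unreachable=0` on all three testbed nodes, which is what marks the deploy phase as healthy. When post-processing console logs like this one, those recap lines can be checked mechanically; a minimal sketch (the regex is my own, not part of any Ansible or Zuul tooling):

```python
import re

# Matches Ansible "PLAY RECAP" host lines as emitted above, e.g.:
#   testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Return (hostname, {counter: value}) for one PLAY RECAP line."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    stats = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", m.group("stats"))}
    return m.group("host"), stats

host, stats = parse_recap_line(
    "testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0"
)
assert host == "testbed-node-0"
assert stats["failed"] == 0 and stats["unreachable"] == 0
```

A log scanner built on this would flag a run as broken as soon as any host's `failed` or `unreachable` counter is non-zero.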
2025-06-02 18:03:25.098279 | orchestrator | 2025-06-02 18:03:25 | INFO  | Flavor SCS-1V-4 created 2025-06-02 18:03:25.310733 | orchestrator | 2025-06-02 18:03:25 | INFO  | Flavor SCS-2V-8 created 2025-06-02 18:03:25.516256 | orchestrator | 2025-06-02 18:03:25 | INFO  | Flavor SCS-4V-16 created 2025-06-02 18:03:25.682172 | orchestrator | 2025-06-02 18:03:25 | INFO  | Flavor SCS-8V-32 created 2025-06-02 18:03:25.810938 | orchestrator | 2025-06-02 18:03:25 | INFO  | Flavor SCS-1V-2 created 2025-06-02 18:03:25.962805 | orchestrator | 2025-06-02 18:03:25 | INFO  | Flavor SCS-2V-4 created 2025-06-02 18:03:26.124650 | orchestrator | 2025-06-02 18:03:26 | INFO  | Flavor SCS-4V-8 created 2025-06-02 18:03:26.286556 | orchestrator | 2025-06-02 18:03:26 | INFO  | Flavor SCS-8V-16 created 2025-06-02 18:03:26.435189 | orchestrator | 2025-06-02 18:03:26 | INFO  | Flavor SCS-16V-32 created 2025-06-02 18:03:26.587466 | orchestrator | 2025-06-02 18:03:26 | INFO  | Flavor SCS-1V-8 created 2025-06-02 18:03:26.738935 | orchestrator | 2025-06-02 18:03:26 | INFO  | Flavor SCS-2V-16 created 2025-06-02 18:03:26.897024 | orchestrator | 2025-06-02 18:03:26 | INFO  | Flavor SCS-4V-32 created 2025-06-02 18:03:27.049466 | orchestrator | 2025-06-02 18:03:27 | INFO  | Flavor SCS-1L-1 created 2025-06-02 18:03:27.200429 | orchestrator | 2025-06-02 18:03:27 | INFO  | Flavor SCS-2V-4-20s created 2025-06-02 18:03:27.381905 | orchestrator | 2025-06-02 18:03:27 | INFO  | Flavor SCS-4V-16-100s created 2025-06-02 18:03:27.545749 | orchestrator | 2025-06-02 18:03:27 | INFO  | Flavor SCS-1V-4-10 created 2025-06-02 18:03:27.689845 | orchestrator | 2025-06-02 18:03:27 | INFO  | Flavor SCS-2V-8-20 created 2025-06-02 18:03:27.859068 | orchestrator | 2025-06-02 18:03:27 | INFO  | Flavor SCS-4V-16-50 created 2025-06-02 18:03:28.020229 | orchestrator | 2025-06-02 18:03:28 | INFO  | Flavor SCS-8V-32-100 created 2025-06-02 18:03:28.158706 | orchestrator | 2025-06-02 18:03:28 | INFO  | Flavor SCS-1V-2-5 created 
2025-06-02 18:03:28.305282 | orchestrator | 2025-06-02 18:03:28 | INFO  | Flavor SCS-2V-4-10 created
2025-06-02 18:03:28.452351 | orchestrator | 2025-06-02 18:03:28 | INFO  | Flavor SCS-4V-8-20 created
2025-06-02 18:03:28.605156 | orchestrator | 2025-06-02 18:03:28 | INFO  | Flavor SCS-8V-16-50 created
2025-06-02 18:03:28.769404 | orchestrator | 2025-06-02 18:03:28 | INFO  | Flavor SCS-16V-32-100 created
2025-06-02 18:03:28.919644 | orchestrator | 2025-06-02 18:03:28 | INFO  | Flavor SCS-1V-8-20 created
2025-06-02 18:03:29.083516 | orchestrator | 2025-06-02 18:03:29 | INFO  | Flavor SCS-2V-16-50 created
2025-06-02 18:03:29.230950 | orchestrator | 2025-06-02 18:03:29 | INFO  | Flavor SCS-4V-32-100 created
2025-06-02 18:03:29.386901 | orchestrator | 2025-06-02 18:03:29 | INFO  | Flavor SCS-1L-1-5 created
2025-06-02 18:03:31.684721 | orchestrator | 2025-06-02 18:03:31 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-06-02 18:03:31.689812 | orchestrator | Registering Redlock._acquired_script
2025-06-02 18:03:31.689904 | orchestrator | Registering Redlock._extend_script
2025-06-02 18:03:31.690112 | orchestrator | Registering Redlock._release_script
2025-06-02 18:03:31.750137 | orchestrator | 2025-06-02 18:03:31 | INFO  | Task 1b47d19c-a4bf-495b-bf94-6b8bb3813ec6 (bootstrap-basic) was prepared for execution.
2025-06-02 18:03:31.750236 | orchestrator | 2025-06-02 18:03:31 | INFO  | It takes a moment until task 1b47d19c-a4bf-495b-bf94-6b8bb3813ec6 (bootstrap-basic) has been started and output is visible here.
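The flavor names created by the flavor-manager follow the SCS naming scheme, roughly `SCS-<vCPUs><cpu class>-<RAM in GiB>[-<root disk in GB><disk class>]`, so `SCS-2V-4-20s` means 2 vCPUs, 4 GiB RAM and a 20 GB SSD-class root disk. A simplified decoder under that assumption; the regex is a sketch and does not cover every extension of the SCS flavor naming standard:

```python
import re

# Simplified pattern for SCS flavor names as seen in the log
# (assumption: only the basic name forms used above, not the full spec).
SCS_RE = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_suffix>[VLTC])"     # vCPU count and CPU class
    r"-(?P<ram>\d+)"                                 # RAM in GiB
    r"(?:-(?P<disk>\d+)(?P<disk_suffix>[spn]?))?$"   # optional root disk in GB
)

def parse_scs_flavor(name: str) -> dict:
    m = SCS_RE.match(name)
    if not m:
        raise ValueError(f"not an SCS flavor name: {name}")
    d = m.groupdict()
    return {
        "vcpus": int(d["cpus"]),
        "ram_gib": int(d["ram"]),
        "disk_gb": int(d["disk"]) if d["disk"] else None,
        "cpu_class": d["cpu_suffix"],
        "disk_class": d["disk_suffix"] or None,
    }
```

For example, `parse_scs_flavor("SCS-16V-32")` yields 16 vCPUs and 32 GiB RAM with no root disk component.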
2025-06-02 18:03:36.142494 | orchestrator |
2025-06-02 18:03:36.143399 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-06-02 18:03:36.144728 | orchestrator |
2025-06-02 18:03:36.145606 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-06-02 18:03:36.147830 | orchestrator | Monday 02 June 2025 18:03:36 +0000 (0:00:00.093) 0:00:00.093 ***********
2025-06-02 18:03:38.033821 | orchestrator | ok: [localhost]
2025-06-02 18:03:38.034743 | orchestrator |
2025-06-02 18:03:38.034784 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-06-02 18:03:38.034928 | orchestrator | Monday 02 June 2025 18:03:38 +0000 (0:00:01.898) 0:00:01.992 ***********
2025-06-02 18:03:46.420089 | orchestrator | ok: [localhost]
2025-06-02 18:03:46.420866 | orchestrator |
2025-06-02 18:03:46.421167 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-06-02 18:03:46.422506 | orchestrator | Monday 02 June 2025 18:03:46 +0000 (0:00:08.383) 0:00:10.376 ***********
2025-06-02 18:03:52.955285 | orchestrator | changed: [localhost]
2025-06-02 18:03:52.955364 | orchestrator |
2025-06-02 18:03:52.956624 | orchestrator | TASK [Get volume type local] ***************************************************
2025-06-02 18:03:52.957562 | orchestrator | Monday 02 June 2025 18:03:52 +0000 (0:00:06.534) 0:00:16.910 ***********
2025-06-02 18:03:59.926896 | orchestrator | ok: [localhost]
2025-06-02 18:03:59.927101 | orchestrator |
2025-06-02 18:03:59.928735 | orchestrator | TASK [Create volume type local] ************************************************
2025-06-02 18:03:59.929158 | orchestrator | Monday 02 June 2025 18:03:59 +0000 (0:00:06.972) 0:00:23.883 ***********
2025-06-02 18:04:07.098991 | orchestrator | changed: [localhost]
2025-06-02 18:04:07.099174 | orchestrator |
2025-06-02 18:04:07.099308 | orchestrator | TASK [Create public network] ***************************************************
2025-06-02 18:04:07.101591 | orchestrator | Monday 02 June 2025 18:04:07 +0000 (0:00:07.169) 0:00:31.053 ***********
2025-06-02 18:04:12.750258 | orchestrator | changed: [localhost]
2025-06-02 18:04:12.750427 | orchestrator |
2025-06-02 18:04:12.750552 | orchestrator | TASK [Set public network to default] *******************************************
2025-06-02 18:04:12.751112 | orchestrator | Monday 02 June 2025 18:04:12 +0000 (0:00:05.653) 0:00:36.706 ***********
2025-06-02 18:04:18.939910 | orchestrator | changed: [localhost]
2025-06-02 18:04:18.940035 | orchestrator |
2025-06-02 18:04:18.940762 | orchestrator | TASK [Create public subnet] ****************************************************
2025-06-02 18:04:18.941157 | orchestrator | Monday 02 June 2025 18:04:18 +0000 (0:00:06.189) 0:00:42.895 ***********
2025-06-02 18:04:23.791429 | orchestrator | changed: [localhost]
2025-06-02 18:04:23.791631 | orchestrator |
2025-06-02 18:04:23.792204 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-06-02 18:04:23.792658 | orchestrator | Monday 02 June 2025 18:04:23 +0000 (0:00:04.852) 0:00:47.748 ***********
2025-06-02 18:04:27.678230 | orchestrator | changed: [localhost]
2025-06-02 18:04:27.678436 | orchestrator |
2025-06-02 18:04:27.678556 | orchestrator | TASK [Create manager role] *****************************************************
2025-06-02 18:04:27.678580 | orchestrator | Monday 02 June 2025 18:04:27 +0000 (0:00:03.885) 0:00:51.634 ***********
2025-06-02 18:04:31.345756 | orchestrator | ok: [localhost]
2025-06-02 18:04:31.345887 | orchestrator |
2025-06-02 18:04:31.347073 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 18:04:31.347243 | orchestrator | 2025-06-02 18:04:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
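Each resource in the play is handled as a get-then-create pair (e.g. `Get volume type LUKS` followed by `Create volume type LUKS`), which is what makes the run idempotent: the create task only reports `changed` when the lookup found nothing. A minimal sketch of that pattern, with toy in-memory stand-ins for the OpenStack API (not the actual OSISM code):

```python
def ensure(name, get, create):
    """Get-then-create: create only when the lookup returns nothing."""
    existing = get(name)
    if existing is not None:
        return existing, False   # Ansible would report "ok"
    return create(name), True    # Ansible would report "changed"

# Toy registry standing in for the Cinder volume-type API (illustrative).
volume_types = {}

def get_type(name):
    return volume_types.get(name)

def create_type(name):
    volume_types[name] = {"name": name}
    return volume_types[name]
```

Running `ensure("LUKS", get_type, create_type)` twice creates the resource once and reports `changed` only on the first call, matching the ok/changed pairs in the recap above.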
2025-06-02 18:04:31.347256 | orchestrator | 2025-06-02 18:04:31 | INFO  | Please wait and do not abort execution.
2025-06-02 18:04:31.348199 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-06-02 18:04:31.348538 | orchestrator |
2025-06-02 18:04:31.349282 | orchestrator |
2025-06-02 18:04:31.350746 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 18:04:31.351607 | orchestrator | Monday 02 June 2025 18:04:31 +0000 (0:00:03.668) 0:00:55.302 ***********
2025-06-02 18:04:31.353048 | orchestrator | ===============================================================================
2025-06-02 18:04:31.354192 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.38s
2025-06-02 18:04:31.354903 | orchestrator | Create volume type local ------------------------------------------------ 7.17s
2025-06-02 18:04:31.356596 | orchestrator | Get volume type local --------------------------------------------------- 6.97s
2025-06-02 18:04:31.357259 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.53s
2025-06-02 18:04:31.358124 | orchestrator | Set public network to default ------------------------------------------- 6.19s
2025-06-02 18:04:31.358618 | orchestrator | Create public network --------------------------------------------------- 5.65s
2025-06-02 18:04:31.359300 | orchestrator | Create public subnet ---------------------------------------------------- 4.85s
2025-06-02 18:04:31.360080 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.89s
2025-06-02 18:04:31.360171 | orchestrator | Create manager role ----------------------------------------------------- 3.67s
2025-06-02 18:04:31.360992 | orchestrator | Gathering Facts --------------------------------------------------------- 1.90s
2025-06-02 18:04:33.785457 | orchestrator | 2025-06-02 18:04:33 | INFO  | It takes a moment until task dcc88e82-5efb-42db-9307-9533d8e36faf (image-manager) has been started and output is visible here.
2025-06-02 18:04:37.496754 | orchestrator | 2025-06-02 18:04:37 | INFO  | Processing image 'Cirros 0.6.2'
2025-06-02 18:04:37.713300 | orchestrator | 2025-06-02 18:04:37 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-06-02 18:04:37.714398 | orchestrator | 2025-06-02 18:04:37 | INFO  | Importing image Cirros 0.6.2
2025-06-02 18:04:37.714447 | orchestrator | 2025-06-02 18:04:37 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-06-02 18:04:39.672101 | orchestrator | 2025-06-02 18:04:39 | INFO  | Waiting for import to complete...
2025-06-02 18:04:49.847252 | orchestrator | 2025-06-02 18:04:49 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-06-02 18:04:50.042321 | orchestrator | 2025-06-02 18:04:50 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-06-02 18:04:50.042943 | orchestrator | 2025-06-02 18:04:50 | INFO  | Setting internal_version = 0.6.2
2025-06-02 18:04:50.043679 | orchestrator | 2025-06-02 18:04:50 | INFO  | Setting image_original_user = cirros
2025-06-02 18:04:50.045083 | orchestrator | 2025-06-02 18:04:50 | INFO  | Adding tag os:cirros
2025-06-02 18:04:50.367024 | orchestrator | 2025-06-02 18:04:50 | INFO  | Setting property architecture: x86_64
2025-06-02 18:04:50.597088 | orchestrator | 2025-06-02 18:04:50 | INFO  | Setting property hw_disk_bus: scsi
2025-06-02 18:04:50.857819 | orchestrator | 2025-06-02 18:04:50 | INFO  | Setting property hw_rng_model: virtio
2025-06-02 18:04:51.083573 | orchestrator | 2025-06-02 18:04:51 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-02 18:04:51.282993 | orchestrator | 2025-06-02 18:04:51 | INFO  | Setting property hw_watchdog_action: reset
2025-06-02 18:04:51.503161 | orchestrator | 2025-06-02 18:04:51 | INFO  | Setting property hypervisor_type: qemu
2025-06-02 18:04:51.734245 | orchestrator | 2025-06-02 18:04:51 | INFO  | Setting property os_distro: cirros
2025-06-02 18:04:51.930476 | orchestrator | 2025-06-02 18:04:51 | INFO  | Setting property replace_frequency: never
2025-06-02 18:04:52.184234 | orchestrator | 2025-06-02 18:04:52 | INFO  | Setting property uuid_validity: none
2025-06-02 18:04:52.382444 | orchestrator | 2025-06-02 18:04:52 | INFO  | Setting property provided_until: none
2025-06-02 18:04:52.620183 | orchestrator | 2025-06-02 18:04:52 | INFO  | Setting property image_description: Cirros
2025-06-02 18:04:52.866276 | orchestrator | 2025-06-02 18:04:52 | INFO  | Setting property image_name: Cirros
2025-06-02 18:04:53.130573 | orchestrator | 2025-06-02 18:04:53 | INFO  | Setting property internal_version: 0.6.2
2025-06-02 18:04:53.345698 | orchestrator | 2025-06-02 18:04:53 | INFO  | Setting property image_original_user: cirros
2025-06-02 18:04:53.587948 | orchestrator | 2025-06-02 18:04:53 | INFO  | Setting property os_version: 0.6.2
2025-06-02 18:04:53.823297 | orchestrator | 2025-06-02 18:04:53 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-06-02 18:04:54.049281 | orchestrator | 2025-06-02 18:04:54 | INFO  | Setting property image_build_date: 2023-05-30
2025-06-02 18:04:54.276167 | orchestrator | 2025-06-02 18:04:54 | INFO  | Checking status of 'Cirros 0.6.2'
2025-06-02 18:04:54.278402 | orchestrator | 2025-06-02 18:04:54 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-06-02 18:04:54.279319 | orchestrator | 2025-06-02 18:04:54 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-06-02 18:04:54.505953 | orchestrator | 2025-06-02 18:04:54 | INFO  | Processing image 'Cirros 0.6.3'
2025-06-02 18:04:54.704578 | orchestrator | 2025-06-02 18:04:54 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-06-02 18:04:54.705585 | orchestrator | 2025-06-02 18:04:54 | INFO  | Importing image Cirros 0.6.3
2025-06-02 18:04:54.706418 | orchestrator | 2025-06-02 18:04:54 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-02 18:04:55.889560 | orchestrator | 2025-06-02 18:04:55 | INFO  | Waiting for image to leave queued state...
2025-06-02 18:04:57.935272 | orchestrator | 2025-06-02 18:04:57 | INFO  | Waiting for import to complete...
2025-06-02 18:05:08.119249 | orchestrator | 2025-06-02 18:05:08 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-06-02 18:05:08.378110 | orchestrator | 2025-06-02 18:05:08 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-06-02 18:05:08.378260 | orchestrator | 2025-06-02 18:05:08 | INFO  | Setting internal_version = 0.6.3
2025-06-02 18:05:08.378328 | orchestrator | 2025-06-02 18:05:08 | INFO  | Setting image_original_user = cirros
2025-06-02 18:05:08.379464 | orchestrator | 2025-06-02 18:05:08 | INFO  | Adding tag os:cirros
2025-06-02 18:05:08.616669 | orchestrator | 2025-06-02 18:05:08 | INFO  | Setting property architecture: x86_64
2025-06-02 18:05:08.875203 | orchestrator | 2025-06-02 18:05:08 | INFO  | Setting property hw_disk_bus: scsi
2025-06-02 18:05:09.050310 | orchestrator | 2025-06-02 18:05:09 | INFO  | Setting property hw_rng_model: virtio
2025-06-02 18:05:09.287466 | orchestrator | 2025-06-02 18:05:09 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-02 18:05:09.492655 | orchestrator | 2025-06-02 18:05:09 | INFO  | Setting property hw_watchdog_action: reset
2025-06-02 18:05:09.700105 | orchestrator | 2025-06-02 18:05:09 | INFO  | Setting property hypervisor_type: qemu
2025-06-02 18:05:09.916941 | orchestrator | 2025-06-02 18:05:09 | INFO  | Setting property os_distro: cirros
2025-06-02 18:05:10.151172 | orchestrator | 2025-06-02 18:05:10 | INFO  | Setting property replace_frequency: never
2025-06-02 18:05:10.348184 | orchestrator | 2025-06-02 18:05:10 | INFO  | Setting property uuid_validity: none
2025-06-02 18:05:10.553963 | orchestrator | 2025-06-02 18:05:10 | INFO  | Setting property provided_until: none
2025-06-02 18:05:10.760549 | orchestrator | 2025-06-02 18:05:10 | INFO  | Setting property image_description: Cirros
2025-06-02 18:05:10.991907 | orchestrator | 2025-06-02 18:05:10 | INFO  | Setting property image_name: Cirros
2025-06-02 18:05:11.293685 | orchestrator | 2025-06-02 18:05:11 | INFO  | Setting property internal_version: 0.6.3
2025-06-02 18:05:11.572932 | orchestrator | 2025-06-02 18:05:11 | INFO  | Setting property image_original_user: cirros
2025-06-02 18:05:11.792504 | orchestrator | 2025-06-02 18:05:11 | INFO  | Setting property os_version: 0.6.3
2025-06-02 18:05:12.020005 | orchestrator | 2025-06-02 18:05:12 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-06-02 18:05:12.243630 | orchestrator | 2025-06-02 18:05:12 | INFO  | Setting property image_build_date: 2024-09-26
2025-06-02 18:05:12.676345 | orchestrator | 2025-06-02 18:05:12 | INFO  | Checking status of 'Cirros 0.6.3'
2025-06-02 18:05:12.677551 | orchestrator | 2025-06-02 18:05:12 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-06-02 18:05:12.678549 | orchestrator | 2025-06-02 18:05:12 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-06-02 18:05:13.769796 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-06-02 18:05:15.738377 | orchestrator | 2025-06-02 18:05:15 | INFO  | date: 2025-06-02
2025-06-02 18:05:15.738483 | orchestrator | 2025-06-02 18:05:15 | INFO  | image: octavia-amphora-haproxy-2024.2.20250602.qcow2
2025-06-02 18:05:15.738503 | orchestrator | 2025-06-02 18:05:15 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2
2025-06-02 18:05:15.738539 | orchestrator | 2025-06-02 18:05:15 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2.CHECKSUM
2025-06-02 18:05:15.760151 | orchestrator | 2025-06-02 18:05:15 | INFO  | checksum: 4244ae669e0302e4de8dd880cdee4c27c232e9d393dd18f3521b5d0e7c284b7c
2025-06-02 18:05:15.839009 | orchestrator | 2025-06-02 18:05:15 | INFO  | It takes a moment until task 734e9f7d-d95b-4e9d-93ca-638620c33185 (image-manager) has been started and output is visible here.
2025-06-02 18:05:16.081574 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-06-02 18:05:16.082072 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound
2025-06-02 18:05:18.335636 | orchestrator | 2025-06-02 18:05:18 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-02'
2025-06-02 18:05:18.353228 | orchestrator | 2025-06-02 18:05:18 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2: 200
2025-06-02 18:05:18.354269 | orchestrator | 2025-06-02 18:05:18 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-02
2025-06-02 18:05:18.355507 | orchestrator | 2025-06-02 18:05:18 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2
2025-06-02 18:05:18.759477 | orchestrator | 2025-06-02 18:05:18 | INFO  | Waiting for image to leave queued state...
2025-06-02 18:05:20.798131 | orchestrator | 2025-06-02 18:05:20 | INFO  | Waiting for import to complete...
2025-06-02 18:05:30.885331 | orchestrator | 2025-06-02 18:05:30 | INFO  | Waiting for import to complete...
2025-06-02 18:05:40.973456 | orchestrator | 2025-06-02 18:05:40 | INFO  | Waiting for import to complete...
2025-06-02 18:05:51.069677 | orchestrator | 2025-06-02 18:05:51 | INFO  | Waiting for import to complete...
2025-06-02 18:06:01.162271 | orchestrator | 2025-06-02 18:06:01 | INFO  | Waiting for import to complete...
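The 301 script above resolves the image's SHA256 digest from the `.CHECKSUM` file before handing the URL to the image-manager. Verifying a downloaded file against such a digest can be sketched as follows; the helper names are illustrative (not the script's actual code), and the file is read in chunks so multi-GB images never sit in memory:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 16) -> str:
    """Stream the file through a SHA256 digest and return the hex string."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        while block := fh.read(chunk_size):
            digest.update(block)
    return digest.hexdigest()

def matches_checksum(path: str, expected: str) -> bool:
    """Compare case-insensitively, as checksum files vary in letter case."""
    return sha256_of(path) == expected.strip().lower()
```

The same streaming-digest approach applies to any artifact download where a published checksum is available next to the file.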
2025-06-02 18:06:11.498355 | orchestrator | 2025-06-02 18:06:11 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-02' successfully completed, reloading images
2025-06-02 18:06:11.853068 | orchestrator | 2025-06-02 18:06:11 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-02'
2025-06-02 18:06:11.853271 | orchestrator | 2025-06-02 18:06:11 | INFO  | Setting internal_version = 2025-06-02
2025-06-02 18:06:11.854259 | orchestrator | 2025-06-02 18:06:11 | INFO  | Setting image_original_user = ubuntu
2025-06-02 18:06:11.855244 | orchestrator | 2025-06-02 18:06:11 | INFO  | Adding tag amphora
2025-06-02 18:06:12.075618 | orchestrator | 2025-06-02 18:06:12 | INFO  | Adding tag os:ubuntu
2025-06-02 18:06:12.298345 | orchestrator | 2025-06-02 18:06:12 | INFO  | Setting property architecture: x86_64
2025-06-02 18:06:12.521708 | orchestrator | 2025-06-02 18:06:12 | INFO  | Setting property hw_disk_bus: scsi
2025-06-02 18:06:12.732634 | orchestrator | 2025-06-02 18:06:12 | INFO  | Setting property hw_rng_model: virtio
2025-06-02 18:06:12.942576 | orchestrator | 2025-06-02 18:06:12 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-06-02 18:06:13.177018 | orchestrator | 2025-06-02 18:06:13 | INFO  | Setting property hw_watchdog_action: reset
2025-06-02 18:06:13.397943 | orchestrator | 2025-06-02 18:06:13 | INFO  | Setting property hypervisor_type: qemu
2025-06-02 18:06:13.621341 | orchestrator | 2025-06-02 18:06:13 | INFO  | Setting property os_distro: ubuntu
2025-06-02 18:06:13.850386 | orchestrator | 2025-06-02 18:06:13 | INFO  | Setting property replace_frequency: quarterly
2025-06-02 18:06:14.071553 | orchestrator | 2025-06-02 18:06:14 | INFO  | Setting property uuid_validity: last-1
2025-06-02 18:06:14.306749 | orchestrator | 2025-06-02 18:06:14 | INFO  | Setting property provided_until: none
2025-06-02 18:06:14.521488 | orchestrator | 2025-06-02 18:06:14 | INFO  | Setting property image_description: OpenStack Octavia Amphora
2025-06-02 18:06:14.737171 | orchestrator | 2025-06-02 18:06:14 | INFO  | Setting property image_name: OpenStack Octavia Amphora
2025-06-02 18:06:14.992214 | orchestrator | 2025-06-02 18:06:14 | INFO  | Setting property internal_version: 2025-06-02
2025-06-02 18:06:15.198662 | orchestrator | 2025-06-02 18:06:15 | INFO  | Setting property image_original_user: ubuntu
2025-06-02 18:06:15.394529 | orchestrator | 2025-06-02 18:06:15 | INFO  | Setting property os_version: 2025-06-02
2025-06-02 18:06:15.629333 | orchestrator | 2025-06-02 18:06:15 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250602.qcow2
2025-06-02 18:06:15.871038 | orchestrator | 2025-06-02 18:06:15 | INFO  | Setting property image_build_date: 2025-06-02
2025-06-02 18:06:16.085994 | orchestrator | 2025-06-02 18:06:16 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-02'
2025-06-02 18:06:16.086287 | orchestrator | 2025-06-02 18:06:16 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-02'
2025-06-02 18:06:16.273493 | orchestrator | 2025-06-02 18:06:16 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-06-02 18:06:16.275442 | orchestrator | 2025-06-02 18:06:16 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-06-02 18:06:16.278147 | orchestrator | 2025-06-02 18:06:16 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-06-02 18:06:16.278582 | orchestrator | 2025-06-02 18:06:16 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-06-02 18:06:16.928046 | orchestrator | ok: Runtime: 0:03:00.617717
2025-06-02 18:06:16.943622 |
2025-06-02 18:06:16.943793 | TASK [Run checks]
2025-06-02 18:06:17.700423 | orchestrator | + set -e
2025-06-02 18:06:17.700620 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-06-02 18:06:17.700643 | orchestrator | ++ export INTERACTIVE=false
2025-06-02 18:06:17.700664 | orchestrator | ++ INTERACTIVE=false
2025-06-02 18:06:17.700678 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-06-02 18:06:17.700691 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-06-02 18:06:17.700704 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-06-02 18:06:17.701426 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-06-02 18:06:17.707465 | orchestrator |
2025-06-02 18:06:17.707617 | orchestrator | # CHECK
2025-06-02 18:06:17.707639 | orchestrator |
2025-06-02 18:06:17.707655 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-02 18:06:17.707675 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-02 18:06:17.707687 | orchestrator | + echo
2025-06-02 18:06:17.707698 | orchestrator | + echo '# CHECK'
2025-06-02 18:06:17.707709 | orchestrator | + echo
2025-06-02 18:06:17.707725 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-02 18:06:17.708192 | orchestrator | ++ semver latest 5.0.0
2025-06-02 18:06:17.768206 | orchestrator |
2025-06-02 18:06:17.768315 | orchestrator | ## Containers @ testbed-manager
2025-06-02 18:06:17.768334 | orchestrator |
2025-06-02 18:06:17.768349 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-02 18:06:17.768362 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-02 18:06:17.768375 | orchestrator | + echo
2025-06-02 18:06:17.768389 | orchestrator | + echo '## Containers @ testbed-manager'
2025-06-02 18:06:17.768403 | orchestrator | + echo
2025-06-02 18:06:17.768416 | orchestrator | + osism container testbed-manager ps
2025-06-02 18:06:19.916550 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-02 18:06:19.916686 | orchestrator | 79e03a46bd1c registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2 "dumb-init --single-…" 13 minutes ago Up 13 minutes prometheus_blackbox_exporter
2025-06-02 18:06:19.916726 | orchestrator | 773fc83b3064 registry.osism.tech/kolla/prometheus-alertmanager:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager
2025-06-02 18:06:19.916747 | orchestrator | 1fbddaee60a5 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor
2025-06-02 18:06:19.916758 | orchestrator | 73f33daece1c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter
2025-06-02 18:06:19.916769 | orchestrator | b1b8b41ffaa1 registry.osism.tech/kolla/prometheus-v2-server:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server
2025-06-02 18:06:19.916786 | orchestrator | 3ba3e70087dc registry.osism.tech/osism/cephclient:reef "/usr/bin/dumb-init …" 17 minutes ago Up 17 minutes cephclient
2025-06-02 18:06:19.916798 | orchestrator | 9f45604e1438 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron
2025-06-02 18:06:19.916877 | orchestrator | 35cd36f10756 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox
2025-06-02 18:06:19.916889 | orchestrator | f8776d43ac3e registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 30 minutes fluentd
2025-06-02 18:06:19.916930 | orchestrator | 1869339c8f6e phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 30 minutes (healthy) 80/tcp phpmyadmin
2025-06-02 18:06:19.916942 | orchestrator | 02a8a80645dd registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 31 minutes openstackclient
2025-06-02 18:06:19.916953 | orchestrator | 940504bed115 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 32 minutes ago Up 32 minutes (healthy) 8080/tcp homer
2025-06-02 18:06:19.916965 | orchestrator | e7117cbb8e09 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 53 minutes (healthy) 192.168.16.5:3128->3128/tcp squid
2025-06-02 18:06:19.916976 | orchestrator | 5106729379ca registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" 57 minutes ago Up 39 minutes (healthy) manager-inventory_reconciler-1
2025-06-02 18:06:19.916988 | orchestrator | e1e6cf295c39 registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" 57 minutes ago Up 39 minutes (healthy) kolla-ansible
2025-06-02 18:06:19.917020 | orchestrator | bb2a711b26c0 registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" 57 minutes ago Up 39 minutes (healthy) osism-kubernetes
2025-06-02 18:06:19.917039 | orchestrator | 88243c0f671d registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" 57 minutes ago Up 39 minutes (healthy) osism-ansible
2025-06-02 18:06:19.917050 | orchestrator | 370ec57231fa registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" 57 minutes ago Up 39 minutes (healthy) ceph-ansible
2025-06-02 18:06:19.917061 | orchestrator | 1050c247bc47 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 57 minutes ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1
2025-06-02 18:06:19.917073 | orchestrator | b04c4116633e registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-flower-1
2025-06-02 18:06:19.917084 | orchestrator | 84072cc547a3 registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" 57 minutes ago Up 40 minutes (healthy) osismclient
2025-06-02 18:06:19.917095 | orchestrator | 54da6a0d4eaa registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-watchdog-1
2025-06-02 18:06:19.917106 | orchestrator | 36047ecb62c4 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 57 minutes ago Up 40 minutes (healthy) 6379/tcp manager-redis-1
2025-06-02 18:06:19.917117 | orchestrator | bd508453e334 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-beat-1
2025-06-02 18:06:19.917136 | orchestrator | cd0825cb36d0 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 57 minutes ago Up 40 minutes (healthy) 3306/tcp manager-mariadb-1
2025-06-02 18:06:19.917148 | orchestrator | 78733da12705 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1
2025-06-02 18:06:19.917159 | orchestrator | 87af06c8ee8b registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-listener-1
2025-06-02 18:06:19.917171 | orchestrator | 08b73d619d80 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-openstack-1
2025-06-02 18:06:19.917182 | orchestrator | 83658a6334b9 registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 59 minutes ago Up 59 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik
2025-06-02 18:06:20.188365 | orchestrator |
2025-06-02 18:06:20.188463 | orchestrator | ## Images @ testbed-manager
2025-06-02 18:06:20.188484 | orchestrator |
2025-06-02 18:06:20.188493 | orchestrator | + echo
2025-06-02 18:06:20.188501 | orchestrator | + echo '## Images @ testbed-manager'
2025-06-02 18:06:20.188511 | orchestrator | + echo
2025-06-02 18:06:20.188521 | orchestrator | + osism container testbed-manager images
2025-06-02 18:06:22.363085 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2025-06-02 18:06:22.363240 | orchestrator | registry.osism.tech/osism/osism-ansible latest da159755f949 About an hour ago 577MB
2025-06-02 18:06:22.363329 | orchestrator | registry.osism.tech/osism/osism latest ac1f7959a33a About an hour ago 297MB
2025-06-02 18:06:22.363345 | orchestrator | registry.osism.tech/osism/kolla-ansible 2024.2 8f1cf06d366b 5 hours ago 574MB
2025-06-02 18:06:22.363357 | orchestrator | registry.osism.tech/osism/homer v25.05.2 e73e0506845d 15 hours ago 11.5MB
2025-06-02 18:06:22.363368 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 86ee4afc8387 15 hours ago 225MB
2025-06-02 18:06:22.363379 | orchestrator | registry.osism.tech/osism/cephclient reef 3d7d8b8bbba7 15 hours ago 454MB
2025-06-02 18:06:22.363391 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 d83e4c60464a 17 hours ago 629MB
2025-06-02 18:06:22.363402 | orchestrator | registry.osism.tech/kolla/cron 2024.2 b5b108bf8b06 17 hours ago 319MB
2025-06-02 18:06:22.363413 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d96ad4a06177 17 hours ago 747MB
2025-06-02 18:06:22.363424 | orchestrator | registry.osism.tech/kolla/prometheus-alertmanager 2024.2 98f0ac7b228f 17 hours ago 457MB
2025-06-02 18:06:22.363435 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b4411222e57e 17 hours ago 411MB
2025-06-02 18:06:22.363446 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5134a96e4dfe 17 hours ago 359MB
2025-06-02 18:06:22.363457 | orchestrator | registry.osism.tech/kolla/prometheus-blackbox-exporter 2024.2 058fdfb821be 17 hours ago 361MB
2025-06-02 18:06:22.363467 | orchestrator | registry.osism.tech/kolla/prometheus-v2-server 2024.2 fef9d4ae652b 17 hours ago 892MB
2025-06-02 18:06:22.363478 | orchestrator | registry.osism.tech/osism/ceph-ansible reef b20110f9400d 18 hours ago 538MB
2025-06-02 18:06:22.363513 | orchestrator | registry.osism.tech/osism/osism-kubernetes latest 95f78bc350f5 18 hours ago 1.21GB
2025-06-02 18:06:22.363524 | orchestrator | registry.osism.tech/osism/inventory-reconciler latest 77eaadf2782f 18 hours ago 310MB
2025-06-02 18:06:22.363535 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 4 days ago 41.4MB
2025-06-02 18:06:22.363546 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 6 days ago 224MB
2025-06-02 18:06:22.363556 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.7.2 4815a3e162ea 3 months ago 328MB
2025-06-02 18:06:22.363567 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB
2025-06-02 18:06:22.363578 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB
2025-06-02 18:06:22.363589 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 11 months ago 146MB
2025-06-02 18:06:22.619050 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-06-02 18:06:22.619247 | orchestrator | ++ semver latest 5.0.0
2025-06-02 18:06:22.665668 | orchestrator |
2025-06-02 18:06:22.665779 | orchestrator | ## Containers @ testbed-node-0
2025-06-02 18:06:22.665795 | orchestrator |
2025-06-02 18:06:22.665846 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-02 18:06:22.665858 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-02 18:06:22.665870 | orchestrator | + echo
2025-06-02 18:06:22.665881 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-06-02 18:06:22.665894 | orchestrator | + echo
2025-06-02 18:06:22.665905 | orchestrator | + osism container testbed-node-0 ps
2025-06-02 18:06:24.842209 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-06-02 18:06:24.842289 | orchestrator | ae824070bebf registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-06-02 18:06:24.842297 | orchestrator | 54ca7c6ee062 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-06-02 18:06:24.842302 | orchestrator | 43eab6580301 registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager
2025-06-02 18:06:24.842306 | orchestrator | e13670fa8354 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent
2025-06-02 18:06:24.842310 | orchestrator | df3ebd9da1c4 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api
2025-06-02 18:06:24.842323 | orchestrator | f45564707176 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor
2025-06-02 18:06:24.842327 | orchestrator | 18b780424a3f registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api
2025-06-02 18:06:24.842331 | orchestrator | 9e87ed67ce1a registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana
2025-06-02 18:06:24.842335 | orchestrator | 6ea87f286f2f registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker
2025-06-02 18:06:24.842339 | orchestrator | b82098e5d9a6 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api
2025-06-02 18:06:24.842358 | orchestrator | 6f7dc0813210 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns
2025-06-02 18:06:24.842362 | orchestrator | a09558bf4048 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy
2025-06-02 18:06:24.842366 | orchestrator | 2464114b54f8 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer
2025-06-02 18:06:24.842370 | orchestrator | 6fd9fefa269d registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server
2025-06-02
18:06:24.842374 | orchestrator | 9a2d7cc85e66 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-02 18:06:24.842378 | orchestrator | 3ade55844636 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-06-02 18:06:24.842382 | orchestrator | 8f335a04564e registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-06-02 18:06:24.842385 | orchestrator | 8ebc55c95974 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-06-02 18:06:24.842389 | orchestrator | 200e35f488e9 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-06-02 18:06:24.842393 | orchestrator | d38e68d1b864 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-06-02 18:06:24.842397 | orchestrator | efa23cb4594f registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-06-02 18:06:24.842411 | orchestrator | 66fd20c9a8a1 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-06-02 18:06:24.842415 | orchestrator | c5e06579be44 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-02 18:06:24.842419 | orchestrator | 456d07c50856 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-06-02 18:06:24.842426 | orchestrator | 9c21e3d12b44 registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 
minutes prometheus_elasticsearch_exporter 2025-06-02 18:06:24.842431 | orchestrator | 7113eeee93bd registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-06-02 18:06:24.842434 | orchestrator | 6a9410c41962 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-06-02 18:06:24.842442 | orchestrator | 4a02f684e7a9 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-06-02 18:06:24.842446 | orchestrator | 4d54049bb6a8 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-06-02 18:06:24.842450 | orchestrator | 0e29ef666815 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 14 minutes prometheus_mysqld_exporter 2025-06-02 18:06:24.842457 | orchestrator | 36bfdfe93237 registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-06-02 18:06:24.842461 | orchestrator | a9c404640623 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0 2025-06-02 18:06:24.842465 | orchestrator | a0e0f9d65184 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 17 minutes ago Up 17 minutes (healthy) keystone 2025-06-02 18:06:24.842468 | orchestrator | 56333494ffc3 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-06-02 18:06:24.842472 | orchestrator | e5e0b22c9f88 registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-06-02 18:06:24.842476 | orchestrator | edb526b631a6 registry.osism.tech/kolla/horizon:2024.2 "dumb-init 
--single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-06-02 18:06:24.842480 | orchestrator | 74ae1b1e36bd registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 20 minutes ago Up 20 minutes (healthy) mariadb 2025-06-02 18:06:24.842484 | orchestrator | a24dab93371d registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-06-02 18:06:24.842487 | orchestrator | c046db7f5648 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-06-02 18:06:24.842491 | orchestrator | 720ce7fa9d93 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2025-06-02 18:06:24.842495 | orchestrator | 7ec6623b6de3 registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-06-02 18:06:24.842499 | orchestrator | ccbe924b86a3 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-06-02 18:06:24.842502 | orchestrator | cc4369795712 registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-06-02 18:06:24.842507 | orchestrator | b5c5d98109d3 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-06-02 18:06:24.842516 | orchestrator | 1c38d4c98cb1 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-06-02 18:06:24.842520 | orchestrator | f8c75c13bdf0 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-06-02 18:06:24.842524 | orchestrator | 6074fff2db2c registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-02 18:06:24.842528 | orchestrator 
| aeb4d134e7b9 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2025-06-02 18:06:24.842534 | orchestrator | fc8078f4555d registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-06-02 18:06:24.842538 | orchestrator | 79e34b36ca01 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-06-02 18:06:24.842545 | orchestrator | 4b2bd071cdc9 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-02 18:06:24.842549 | orchestrator | 74ef87b63baf registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-02 18:06:24.842552 | orchestrator | c088fce72878 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-06-02 18:06:24.842556 | orchestrator | 7b89d47b3d30 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-06-02 18:06:24.842560 | orchestrator | 981ab8ce2d37 registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-02 18:06:24.842563 | orchestrator | d449fce21f6a registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-02 18:06:24.842567 | orchestrator | f0bd6ca0865e registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-02 18:06:25.108094 | orchestrator | 2025-06-02 18:06:25.108183 | orchestrator | ## Images @ testbed-node-0 2025-06-02 18:06:25.108192 | orchestrator | 2025-06-02 18:06:25.108198 | orchestrator | + echo 2025-06-02 18:06:25.108203 | orchestrator | + echo '## Images @ testbed-node-0' 2025-06-02 
18:06:25.108209 | orchestrator | + echo 2025-06-02 18:06:25.108214 | orchestrator | + osism container testbed-node-0 images 2025-06-02 18:06:27.161502 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-02 18:06:27.161580 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 271b9d293e19 15 hours ago 1.27GB 2025-06-02 18:06:27.161587 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 392808c41677 17 hours ago 319MB 2025-06-02 18:06:27.161592 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 d83e4c60464a 17 hours ago 629MB 2025-06-02 18:06:27.161597 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 83dfa36b0b09 17 hours ago 376MB 2025-06-02 18:06:27.161602 | orchestrator | registry.osism.tech/kolla/cron 2024.2 b5b108bf8b06 17 hours ago 319MB 2025-06-02 18:06:27.161606 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 9534d2527bd9 17 hours ago 327MB 2025-06-02 18:06:27.161611 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 88f1dfbac042 17 hours ago 1.59GB 2025-06-02 18:06:27.161615 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 0f911db240a6 17 hours ago 1.01GB 2025-06-02 18:06:27.161635 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 307c7b2e9629 17 hours ago 1.55GB 2025-06-02 18:06:27.161640 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 5b770fdbd519 17 hours ago 330MB 2025-06-02 18:06:27.161645 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d0f7c25d3497 17 hours ago 419MB 2025-06-02 18:06:27.161650 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d96ad4a06177 17 hours ago 747MB 2025-06-02 18:06:27.161654 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a4f9468e38ea 17 hours ago 325MB 2025-06-02 18:06:27.161659 | orchestrator | registry.osism.tech/kolla/redis 2024.2 4b29449821be 17 hours ago 326MB 2025-06-02 18:06:27.161664 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 
75af3084c3d1 17 hours ago 352MB 2025-06-02 18:06:27.161686 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b4411222e57e 17 hours ago 411MB 2025-06-02 18:06:27.161691 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 db5ce49c89cc 17 hours ago 345MB 2025-06-02 18:06:27.161696 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5134a96e4dfe 17 hours ago 359MB 2025-06-02 18:06:27.161700 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 03e0f3198b34 17 hours ago 354MB 2025-06-02 18:06:27.161705 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 8dfe63d220a5 17 hours ago 362MB 2025-06-02 18:06:27.161709 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 34548ea593f0 17 hours ago 362MB 2025-06-02 18:06:27.161714 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 29ac703ff67c 17 hours ago 591MB 2025-06-02 18:06:27.161718 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 fe51ac78c8f1 17 hours ago 1.21GB 2025-06-02 18:06:27.161723 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 c4655637af6e 17 hours ago 947MB 2025-06-02 18:06:27.161727 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 501bf0c10100 17 hours ago 948MB 2025-06-02 18:06:27.161732 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 bff812ef8262 17 hours ago 948MB 2025-06-02 18:06:27.161737 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 e6e013a1a722 17 hours ago 947MB 2025-06-02 18:06:27.161742 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 23e5ad899301 17 hours ago 1.41GB 2025-06-02 18:06:27.161746 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 45b363b7482a 17 hours ago 1.41GB 2025-06-02 18:06:27.161751 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 760164fe4759 17 hours ago 1.29GB 2025-06-02 18:06:27.161755 | orchestrator | 
registry.osism.tech/kolla/nova-conductor 2024.2 f5741b323fe9 17 hours ago 1.29GB 2025-06-02 18:06:27.161760 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 ef9c983c3ed3 17 hours ago 1.3GB 2025-06-02 18:06:27.161764 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 35396146c866 17 hours ago 1.42GB 2025-06-02 18:06:27.161769 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 70795d3e49ef 17 hours ago 1.15GB 2025-06-02 18:06:27.161773 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 de33a20e612e 17 hours ago 1.31GB 2025-06-02 18:06:27.161778 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 462af32e366a 17 hours ago 1.2GB 2025-06-02 18:06:27.161831 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 21905100e3ed 17 hours ago 1.06GB 2025-06-02 18:06:27.161838 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9c686edf4034 17 hours ago 1.06GB 2025-06-02 18:06:27.161843 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 e5000fc07327 17 hours ago 1.06GB 2025-06-02 18:06:27.161847 | orchestrator | registry.osism.tech/kolla/aodh-api 2024.2 8a6a3d63670d 17 hours ago 1.04GB 2025-06-02 18:06:27.161852 | orchestrator | registry.osism.tech/kolla/aodh-notifier 2024.2 d8bc8850fca0 17 hours ago 1.04GB 2025-06-02 18:06:27.161856 | orchestrator | registry.osism.tech/kolla/aodh-evaluator 2024.2 4e7db9d8828a 17 hours ago 1.04GB 2025-06-02 18:06:27.161861 | orchestrator | registry.osism.tech/kolla/aodh-listener 2024.2 6382990ff4a0 17 hours ago 1.04GB 2025-06-02 18:06:27.161865 | orchestrator | registry.osism.tech/kolla/skyline-apiserver 2024.2 da1a6531a58f 17 hours ago 1.11GB 2025-06-02 18:06:27.161870 | orchestrator | registry.osism.tech/kolla/skyline-console 2024.2 89a10b4f8d41 17 hours ago 1.12GB 2025-06-02 18:06:27.161879 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 db5d29469dee 17 hours ago 1.1GB 2025-06-02 18:06:27.161884 | 
orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 47facbd328df 17 hours ago 1.1GB 2025-06-02 18:06:27.161888 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 a89f287066ef 17 hours ago 1.12GB 2025-06-02 18:06:27.161893 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 1f4bea213a07 17 hours ago 1.1GB 2025-06-02 18:06:27.161897 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 97ff50a4c378 17 hours ago 1.12GB 2025-06-02 18:06:27.161902 | orchestrator | registry.osism.tech/kolla/ceilometer-notification 2024.2 aed6aac6097b 17 hours ago 1.04GB 2025-06-02 18:06:27.161906 | orchestrator | registry.osism.tech/kolla/ceilometer-central 2024.2 def5173eaa7a 17 hours ago 1.04GB 2025-06-02 18:06:27.161910 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 c4ed2f5a2192 17 hours ago 1.11GB 2025-06-02 18:06:27.161915 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 ea224ddfbd63 17 hours ago 1.11GB 2025-06-02 18:06:27.161920 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 68b4a4b40b7c 17 hours ago 1.13GB 2025-06-02 18:06:27.161924 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 8f7230e2e54a 17 hours ago 1.04GB 2025-06-02 18:06:27.161929 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 3a64d65ac616 17 hours ago 1.05GB 2025-06-02 18:06:27.161933 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 c3e9f7a9a34d 17 hours ago 1.05GB 2025-06-02 18:06:27.161938 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 99480384bf9d 17 hours ago 1.06GB 2025-06-02 18:06:27.161942 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 16d05b3fd708 17 hours ago 1.05GB 2025-06-02 18:06:27.161947 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 5935e336ac71 17 hours ago 1.06GB 2025-06-02 18:06:27.161951 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 ad58c3a011c5 17 hours ago 
1.05GB 2025-06-02 18:06:27.161956 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 321a68afc007 17 hours ago 1.25GB 2025-06-02 18:06:27.402993 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-02 18:06:27.404126 | orchestrator | ++ semver latest 5.0.0 2025-06-02 18:06:27.454299 | orchestrator | 2025-06-02 18:06:27.454389 | orchestrator | ## Containers @ testbed-node-1 2025-06-02 18:06:27.454401 | orchestrator | 2025-06-02 18:06:27.454409 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-02 18:06:27.454418 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-02 18:06:27.454426 | orchestrator | + echo 2025-06-02 18:06:27.454434 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-06-02 18:06:27.454443 | orchestrator | + echo 2025-06-02 18:06:27.454451 | orchestrator | + osism container testbed-node-1 ps 2025-06-02 18:06:29.581653 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-02 18:06:29.690824 | orchestrator | a67a4ba6eef0 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-02 18:06:29.690911 | orchestrator | 0a924d6064dc registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-06-02 18:06:29.690925 | orchestrator | 7ec246dbecea registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-02 18:06:29.690938 | orchestrator | d7765a22a820 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-02 18:06:29.690976 | orchestrator | 64d061506581 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-06-02 18:06:29.690989 | orchestrator | 769d1b13ea1c 
registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-06-02 18:06:29.691000 | orchestrator | a2f7de664404 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-02 18:06:29.691022 | orchestrator | 89941bf57a73 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 7 minutes (healthy) magnum_api 2025-06-02 18:06:29.691033 | orchestrator | c1202ecde2c9 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-02 18:06:29.691044 | orchestrator | 61cb216f04aa registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-02 18:06:29.691055 | orchestrator | 7985b8d84b45 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-06-02 18:06:29.691066 | orchestrator | 2eb032d07e16 registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-06-02 18:06:29.691084 | orchestrator | e8a767b34982 registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-06-02 18:06:29.691103 | orchestrator | 6987d37b1e85 registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_producer 2025-06-02 18:06:29.691128 | orchestrator | f56050e297a6 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-02 18:06:29.691148 | orchestrator | 36fcfdd82e1a registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-06-02 18:06:29.691167 | orchestrator | 845fdce7514e 
registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-06-02 18:06:29.691186 | orchestrator | 61370f0c6cac registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-06-02 18:06:29.691205 | orchestrator | 79e1a4093722 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) barbican_worker 2025-06-02 18:06:29.691218 | orchestrator | da1d131be219 registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-06-02 18:06:29.691248 | orchestrator | 2d21c82de837 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-06-02 18:06:29.691291 | orchestrator | 880ba40f2322 registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-06-02 18:06:29.691303 | orchestrator | 278599cea328 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-02 18:06:29.691314 | orchestrator | 5eb44b6584be registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-06-02 18:06:29.691335 | orchestrator | f57145a28b1f registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-06-02 18:06:29.691347 | orchestrator | 73bc2e551782 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-06-02 18:06:29.691358 | orchestrator | 3d2a378abb45 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-06-02 
18:06:29.691376 | orchestrator | 482b4e8c4118 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-06-02 18:06:29.691392 | orchestrator | 3e015f7889b4 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-06-02 18:06:29.691410 | orchestrator | a1a9c03b8a9e registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-06-02 18:06:29.691428 | orchestrator | dd1247b5121b registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-06-02 18:06:29.691446 | orchestrator | 717dfa3afcf5 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2025-06-02 18:06:29.691465 | orchestrator | ca4e1b77cb92 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-06-02 18:06:29.691483 | orchestrator | 1156f377037d registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-06-02 18:06:29.691503 | orchestrator | b2874e713623 registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) horizon 2025-06-02 18:06:29.691514 | orchestrator | 64fcd064129b registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-06-02 18:06:29.691525 | orchestrator | a20d2dfee629 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-06-02 18:06:29.691536 | orchestrator | 8493e34d2250 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) 
mariadb 2025-06-02 18:06:29.691547 | orchestrator | 918c6fc929a6 registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch 2025-06-02 18:06:29.691558 | orchestrator | eef01b1971a9 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1 2025-06-02 18:06:29.691569 | orchestrator | 483f3c199aff registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-06-02 18:06:29.691579 | orchestrator | 6f350d626659 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-06-02 18:06:29.691590 | orchestrator | 300b2745355d registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-06-02 18:06:29.691615 | orchestrator | e81d9971c2b7 registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 26 minutes ago Up 26 minutes ovn_northd 2025-06-02 18:06:29.691637 | orchestrator | 5cfca880c905 registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db 2025-06-02 18:06:29.691657 | orchestrator | 863a946aba1d registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db 2025-06-02 18:06:29.691676 | orchestrator | 9f47f56014ec registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-06-02 18:06:29.691694 | orchestrator | f4fca765fb3c registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-06-02 18:06:29.691713 | orchestrator | 983d9ba83449 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2025-06-02 18:06:29.691731 | orchestrator | 1ff19f2eaaea registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 
"dumb-init --single-…" 29 minutes ago Up 28 minutes (healthy) openvswitch_vswitchd 2025-06-02 18:06:29.691750 | orchestrator | a1097cf84ccf registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-02 18:06:29.691770 | orchestrator | 237c2e42e13a registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-02 18:06:29.691789 | orchestrator | 2b0152153b01 registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-06-02 18:06:29.691883 | orchestrator | 589f38b416b5 registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-06-02 18:06:29.691895 | orchestrator | 39deffbabbff registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-02 18:06:29.691912 | orchestrator | 148fa272780d registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-02 18:06:29.691931 | orchestrator | 574f310c7abd registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-02 18:06:29.829786 | orchestrator | 2025-06-02 18:06:29.829923 | orchestrator | ## Images @ testbed-node-1 2025-06-02 18:06:29.829937 | orchestrator | 2025-06-02 18:06:29.829949 | orchestrator | + echo 2025-06-02 18:06:29.829960 | orchestrator | + echo '## Images @ testbed-node-1' 2025-06-02 18:06:29.829972 | orchestrator | + echo 2025-06-02 18:06:29.829984 | orchestrator | + osism container testbed-node-1 images 2025-06-02 18:06:31.976396 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-02 18:06:31.976535 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 271b9d293e19 15 hours ago 1.27GB 2025-06-02 18:06:31.976562 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 392808c41677 17 
hours ago 319MB 2025-06-02 18:06:31.976582 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 d83e4c60464a 17 hours ago 629MB 2025-06-02 18:06:31.976602 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 83dfa36b0b09 17 hours ago 376MB 2025-06-02 18:06:31.976622 | orchestrator | registry.osism.tech/kolla/cron 2024.2 b5b108bf8b06 17 hours ago 319MB 2025-06-02 18:06:31.976667 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 9534d2527bd9 17 hours ago 327MB 2025-06-02 18:06:31.976735 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 0f911db240a6 17 hours ago 1.01GB 2025-06-02 18:06:31.976755 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 88f1dfbac042 17 hours ago 1.59GB 2025-06-02 18:06:31.976772 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 307c7b2e9629 17 hours ago 1.55GB 2025-06-02 18:06:31.976848 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 5b770fdbd519 17 hours ago 330MB 2025-06-02 18:06:31.976872 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d0f7c25d3497 17 hours ago 419MB 2025-06-02 18:06:31.976891 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d96ad4a06177 17 hours ago 747MB 2025-06-02 18:06:31.976912 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a4f9468e38ea 17 hours ago 325MB 2025-06-02 18:06:31.976934 | orchestrator | registry.osism.tech/kolla/redis 2024.2 4b29449821be 17 hours ago 326MB 2025-06-02 18:06:31.976953 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 75af3084c3d1 17 hours ago 352MB 2025-06-02 18:06:31.976971 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b4411222e57e 17 hours ago 411MB 2025-06-02 18:06:31.976990 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 db5ce49c89cc 17 hours ago 345MB 2025-06-02 18:06:31.977008 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5134a96e4dfe 17 hours ago 359MB 
2025-06-02 18:06:31.977027 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 03e0f3198b34 17 hours ago 354MB 2025-06-02 18:06:31.977069 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 8dfe63d220a5 17 hours ago 362MB 2025-06-02 18:06:31.977090 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 34548ea593f0 17 hours ago 362MB 2025-06-02 18:06:31.977108 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 29ac703ff67c 17 hours ago 591MB 2025-06-02 18:06:31.977134 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 fe51ac78c8f1 17 hours ago 1.21GB 2025-06-02 18:06:31.977154 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 c4655637af6e 17 hours ago 947MB 2025-06-02 18:06:31.977174 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 501bf0c10100 17 hours ago 948MB 2025-06-02 18:06:31.977193 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 bff812ef8262 17 hours ago 948MB 2025-06-02 18:06:31.977212 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 e6e013a1a722 17 hours ago 947MB 2025-06-02 18:06:31.977231 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 23e5ad899301 17 hours ago 1.41GB 2025-06-02 18:06:31.977249 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 45b363b7482a 17 hours ago 1.41GB 2025-06-02 18:06:31.977267 | orchestrator | registry.osism.tech/kolla/nova-api 2024.2 760164fe4759 17 hours ago 1.29GB 2025-06-02 18:06:31.977286 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 f5741b323fe9 17 hours ago 1.29GB 2025-06-02 18:06:31.977305 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 ef9c983c3ed3 17 hours ago 1.3GB 2025-06-02 18:06:31.977323 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 35396146c866 17 hours ago 1.42GB 2025-06-02 18:06:31.977340 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 70795d3e49ef 17 hours ago 1.15GB 
2025-06-02 18:06:31.977359 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 de33a20e612e 17 hours ago 1.31GB 2025-06-02 18:06:31.977389 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 462af32e366a 17 hours ago 1.2GB 2025-06-02 18:06:31.977428 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 21905100e3ed 17 hours ago 1.06GB 2025-06-02 18:06:31.977445 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9c686edf4034 17 hours ago 1.06GB 2025-06-02 18:06:31.977463 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 e5000fc07327 17 hours ago 1.06GB 2025-06-02 18:06:31.977481 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 db5d29469dee 17 hours ago 1.1GB 2025-06-02 18:06:31.977498 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 47facbd328df 17 hours ago 1.1GB 2025-06-02 18:06:31.977516 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 a89f287066ef 17 hours ago 1.12GB 2025-06-02 18:06:31.977536 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 1f4bea213a07 17 hours ago 1.1GB 2025-06-02 18:06:31.977555 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 97ff50a4c378 17 hours ago 1.12GB 2025-06-02 18:06:31.977573 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 c4ed2f5a2192 17 hours ago 1.11GB 2025-06-02 18:06:31.977591 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 ea224ddfbd63 17 hours ago 1.11GB 2025-06-02 18:06:31.977609 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 68b4a4b40b7c 17 hours ago 1.13GB 2025-06-02 18:06:31.977626 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 8f7230e2e54a 17 hours ago 1.04GB 2025-06-02 18:06:31.977645 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 3a64d65ac616 17 hours ago 1.05GB 2025-06-02 18:06:31.977664 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 
c3e9f7a9a34d 17 hours ago 1.05GB 2025-06-02 18:06:31.977683 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 99480384bf9d 17 hours ago 1.06GB 2025-06-02 18:06:31.977704 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 16d05b3fd708 17 hours ago 1.05GB 2025-06-02 18:06:31.977723 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 5935e336ac71 17 hours ago 1.06GB 2025-06-02 18:06:31.977742 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 ad58c3a011c5 17 hours ago 1.05GB 2025-06-02 18:06:31.977763 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 321a68afc007 17 hours ago 1.25GB 2025-06-02 18:06:32.249370 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-02 18:06:32.249994 | orchestrator | ++ semver latest 5.0.0 2025-06-02 18:06:32.302632 | orchestrator | 2025-06-02 18:06:32.302823 | orchestrator | ## Containers @ testbed-node-2 2025-06-02 18:06:32.302842 | orchestrator | 2025-06-02 18:06:32.302854 | orchestrator | + [[ -1 -eq -1 ]] 2025-06-02 18:06:32.302865 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-06-02 18:06:32.302876 | orchestrator | + echo 2025-06-02 18:06:32.302887 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-06-02 18:06:32.302899 | orchestrator | + echo 2025-06-02 18:06:32.302910 | orchestrator | + osism container testbed-node-2 ps 2025-06-02 18:06:34.494278 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-02 18:06:34.494378 | orchestrator | d6c556708747 registry.osism.tech/kolla/octavia-worker:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-06-02 18:06:34.494394 | orchestrator | e3f8b2e77999 registry.osism.tech/kolla/octavia-housekeeping:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-06-02 18:06:34.494405 | orchestrator | 76e17e001dac 
registry.osism.tech/kolla/octavia-health-manager:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-06-02 18:06:34.494437 | orchestrator | 92b6f2c6b788 registry.osism.tech/kolla/octavia-driver-agent:2024.2 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-06-02 18:06:34.494449 | orchestrator | b223ab0520a7 registry.osism.tech/kolla/octavia-api:2024.2 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-06-02 18:06:34.494475 | orchestrator | a94034d2b7b0 registry.osism.tech/kolla/grafana:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-06-02 18:06:34.494486 | orchestrator | d14722ecf3c2 registry.osism.tech/kolla/magnum-conductor:2024.2 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) magnum_conductor 2025-06-02 18:06:34.494496 | orchestrator | 561190eed513 registry.osism.tech/kolla/magnum-api:2024.2 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-06-02 18:06:34.494506 | orchestrator | ec65194fe741 registry.osism.tech/kolla/designate-worker:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_worker 2025-06-02 18:06:34.494515 | orchestrator | 93673c9ade03 registry.osism.tech/kolla/placement-api:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-06-02 18:06:34.494525 | orchestrator | 7c3b19c4cf55 registry.osism.tech/kolla/designate-mdns:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) designate_mdns 2025-06-02 18:06:34.494535 | orchestrator | 6422dd9d556d registry.osism.tech/kolla/nova-novncproxy:2024.2 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_novncproxy 2025-06-02 18:06:34.494545 | orchestrator | eb055ee715cf registry.osism.tech/kolla/neutron-server:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-06-02 18:06:34.494554 | orchestrator | 71082c90ccc4 
registry.osism.tech/kolla/designate-producer:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-06-02 18:06:34.494563 | orchestrator | bf1c3372f111 registry.osism.tech/kolla/designate-central:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-06-02 18:06:34.494573 | orchestrator | d4ac37f2db51 registry.osism.tech/kolla/nova-conductor:2024.2 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-06-02 18:06:34.494583 | orchestrator | 9fca3fa49aad registry.osism.tech/kolla/designate-api:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_api 2025-06-02 18:06:34.494592 | orchestrator | a609bcf5cea4 registry.osism.tech/kolla/designate-backend-bind9:2024.2 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_backend_bind9 2025-06-02 18:06:34.494601 | orchestrator | 80bacc1986e9 registry.osism.tech/kolla/barbican-worker:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-06-02 18:06:34.494611 | orchestrator | 04efd8e5a86c registry.osism.tech/kolla/barbican-keystone-listener:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-06-02 18:06:34.494620 | orchestrator | ba5eacb814e4 registry.osism.tech/kolla/barbican-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-06-02 18:06:34.494645 | orchestrator | c368ee179c3a registry.osism.tech/kolla/nova-api:2024.2 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) nova_api 2025-06-02 18:06:34.494654 | orchestrator | 8891a699c659 registry.osism.tech/kolla/nova-scheduler:2024.2 "dumb-init --single-…" 12 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-02 18:06:34.494672 | orchestrator | aa54e9741dc1 registry.osism.tech/kolla/glance-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-06-02 
18:06:34.494682 | orchestrator | c7f082d19c4d registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-06-02 18:06:34.494691 | orchestrator | 002dcd157544 registry.osism.tech/kolla/cinder-scheduler:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_scheduler 2025-06-02 18:06:34.495614 | orchestrator | b0eeb319e532 registry.osism.tech/kolla/cinder-api:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) cinder_api 2025-06-02 18:06:34.495703 | orchestrator | b584d3aa6122 registry.osism.tech/kolla/prometheus-cadvisor:2024.2 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-06-02 18:06:34.495729 | orchestrator | 3b053377a0e1 registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 14 minutes prometheus_memcached_exporter 2025-06-02 18:06:34.495751 | orchestrator | b2460d7a84d7 registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-06-02 18:06:34.495773 | orchestrator | 0952e031fa6c registry.osism.tech/kolla/prometheus-node-exporter:2024.2 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-06-02 18:06:34.495837 | orchestrator | 0ddc500dfd90 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2025-06-02 18:06:34.495852 | orchestrator | bd669ec08233 registry.osism.tech/kolla/keystone:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-06-02 18:06:34.495864 | orchestrator | 692ae4d3abb6 registry.osism.tech/kolla/keystone-fernet:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_fernet 2025-06-02 18:06:34.495875 | orchestrator | c393b259917c registry.osism.tech/kolla/horizon:2024.2 "dumb-init --single-…" 18 minutes ago Up 
18 minutes (healthy) horizon 2025-06-02 18:06:34.495885 | orchestrator | c227c3bca20d registry.osism.tech/kolla/keystone-ssh:2024.2 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone_ssh 2025-06-02 18:06:34.495901 | orchestrator | 2627b0c47482 registry.osism.tech/kolla/opensearch-dashboards:2024.2 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) opensearch_dashboards 2025-06-02 18:06:34.495919 | orchestrator | fdd0ac9f5bf9 registry.osism.tech/kolla/mariadb-server:2024.2 "dumb-init -- kolla_…" 21 minutes ago Up 20 minutes (healthy) mariadb 2025-06-02 18:06:34.495958 | orchestrator | e936ac51f19b registry.osism.tech/kolla/opensearch:2024.2 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-06-02 18:06:34.495977 | orchestrator | 96d07b1513bb registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2 2025-06-02 18:06:34.495996 | orchestrator | 350c4112678a registry.osism.tech/kolla/keepalived:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes keepalived 2025-06-02 18:06:34.496015 | orchestrator | c5915e436ef1 registry.osism.tech/kolla/proxysql:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) proxysql 2025-06-02 18:06:34.496065 | orchestrator | 849acf2b76cc registry.osism.tech/kolla/haproxy:2024.2 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) haproxy 2025-06-02 18:06:34.496086 | orchestrator | d4aace1a6e5e registry.osism.tech/kolla/ovn-northd:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd 2025-06-02 18:06:34.496103 | orchestrator | be94dab3137c registry.osism.tech/kolla/ovn-sb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_sb_db 2025-06-02 18:06:34.496122 | orchestrator | 3c4f41c43055 registry.osism.tech/kolla/ovn-nb-db-server:2024.2 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_nb_db 2025-06-02 18:06:34.496140 | orchestrator | f2578fc8a09c 
registry.osism.tech/kolla/ovn-controller:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_controller 2025-06-02 18:06:34.496151 | orchestrator | 9c8433246f5a registry.osism.tech/kolla/rabbitmq:2024.2 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) rabbitmq 2025-06-02 18:06:34.496162 | orchestrator | 734ce0e2fc26 registry.osism.tech/osism/ceph-daemon:reef "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2025-06-02 18:06:34.496191 | orchestrator | 63dbe7acda31 registry.osism.tech/kolla/openvswitch-vswitchd:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-06-02 18:06:34.496206 | orchestrator | 48fa50e4a5b1 registry.osism.tech/kolla/openvswitch-db-server:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-02 18:06:34.496218 | orchestrator | 30d274d88542 registry.osism.tech/kolla/redis-sentinel:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis_sentinel 2025-06-02 18:06:34.496231 | orchestrator | 96a3356741ad registry.osism.tech/kolla/redis:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) redis 2025-06-02 18:06:34.496244 | orchestrator | df373fa652df registry.osism.tech/kolla/memcached:2024.2 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) memcached 2025-06-02 18:06:34.496256 | orchestrator | 1879890c3a8d registry.osism.tech/kolla/cron:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-02 18:06:34.496270 | orchestrator | f76735a50981 registry.osism.tech/kolla/kolla-toolbox:2024.2 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-02 18:06:34.496283 | orchestrator | cabcfd778e2b registry.osism.tech/kolla/fluentd:2024.2 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-02 18:06:34.757613 | orchestrator | 2025-06-02 18:06:34.757716 | orchestrator | ## Images @ testbed-node-2 2025-06-02 18:06:34.757738 | orchestrator 
| 2025-06-02 18:06:34.757751 | orchestrator | + echo 2025-06-02 18:06:34.757761 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-02 18:06:34.757772 | orchestrator | + echo 2025-06-02 18:06:34.757782 | orchestrator | + osism container testbed-node-2 images 2025-06-02 18:06:36.918499 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-02 18:06:36.918603 | orchestrator | registry.osism.tech/osism/ceph-daemon reef 271b9d293e19 15 hours ago 1.27GB 2025-06-02 18:06:36.918615 | orchestrator | registry.osism.tech/kolla/memcached 2024.2 392808c41677 17 hours ago 319MB 2025-06-02 18:06:36.918623 | orchestrator | registry.osism.tech/kolla/fluentd 2024.2 d83e4c60464a 17 hours ago 629MB 2025-06-02 18:06:36.918631 | orchestrator | registry.osism.tech/kolla/rabbitmq 2024.2 83dfa36b0b09 17 hours ago 376MB 2025-06-02 18:06:36.918662 | orchestrator | registry.osism.tech/kolla/cron 2024.2 b5b108bf8b06 17 hours ago 319MB 2025-06-02 18:06:36.918670 | orchestrator | registry.osism.tech/kolla/haproxy 2024.2 9534d2527bd9 17 hours ago 327MB 2025-06-02 18:06:36.918753 | orchestrator | registry.osism.tech/kolla/opensearch 2024.2 88f1dfbac042 17 hours ago 1.59GB 2025-06-02 18:06:36.918770 | orchestrator | registry.osism.tech/kolla/grafana 2024.2 0f911db240a6 17 hours ago 1.01GB 2025-06-02 18:06:36.918783 | orchestrator | registry.osism.tech/kolla/opensearch-dashboards 2024.2 307c7b2e9629 17 hours ago 1.55GB 2025-06-02 18:06:36.918887 | orchestrator | registry.osism.tech/kolla/keepalived 2024.2 5b770fdbd519 17 hours ago 330MB 2025-06-02 18:06:36.918897 | orchestrator | registry.osism.tech/kolla/proxysql 2024.2 d0f7c25d3497 17 hours ago 419MB 2025-06-02 18:06:36.919930 | orchestrator | registry.osism.tech/kolla/kolla-toolbox 2024.2 d96ad4a06177 17 hours ago 747MB 2025-06-02 18:06:36.920033 | orchestrator | registry.osism.tech/kolla/redis-sentinel 2024.2 a4f9468e38ea 17 hours ago 325MB 2025-06-02 18:06:36.920058 | orchestrator | registry.osism.tech/kolla/redis 2024.2 
4b29449821be 17 hours ago 326MB 2025-06-02 18:06:36.920078 | orchestrator | registry.osism.tech/kolla/prometheus-memcached-exporter 2024.2 75af3084c3d1 17 hours ago 352MB 2025-06-02 18:06:36.920097 | orchestrator | registry.osism.tech/kolla/prometheus-cadvisor 2024.2 b4411222e57e 17 hours ago 411MB 2025-06-02 18:06:36.920115 | orchestrator | registry.osism.tech/kolla/prometheus-elasticsearch-exporter 2024.2 db5ce49c89cc 17 hours ago 345MB 2025-06-02 18:06:36.920134 | orchestrator | registry.osism.tech/kolla/prometheus-mysqld-exporter 2024.2 03e0f3198b34 17 hours ago 354MB 2025-06-02 18:06:36.920153 | orchestrator | registry.osism.tech/kolla/prometheus-node-exporter 2024.2 5134a96e4dfe 17 hours ago 359MB 2025-06-02 18:06:36.920172 | orchestrator | registry.osism.tech/kolla/openvswitch-vswitchd 2024.2 8dfe63d220a5 17 hours ago 362MB 2025-06-02 18:06:36.920189 | orchestrator | registry.osism.tech/kolla/openvswitch-db-server 2024.2 34548ea593f0 17 hours ago 362MB 2025-06-02 18:06:36.920200 | orchestrator | registry.osism.tech/kolla/mariadb-server 2024.2 29ac703ff67c 17 hours ago 591MB 2025-06-02 18:06:36.920212 | orchestrator | registry.osism.tech/kolla/horizon 2024.2 fe51ac78c8f1 17 hours ago 1.21GB 2025-06-02 18:06:36.920227 | orchestrator | registry.osism.tech/kolla/ovn-nb-db-server 2024.2 c4655637af6e 17 hours ago 947MB 2025-06-02 18:06:36.920244 | orchestrator | registry.osism.tech/kolla/ovn-controller 2024.2 501bf0c10100 17 hours ago 948MB 2025-06-02 18:06:36.920263 | orchestrator | registry.osism.tech/kolla/ovn-northd 2024.2 bff812ef8262 17 hours ago 948MB 2025-06-02 18:06:36.920281 | orchestrator | registry.osism.tech/kolla/ovn-sb-db-server 2024.2 e6e013a1a722 17 hours ago 947MB 2025-06-02 18:06:36.920299 | orchestrator | registry.osism.tech/kolla/cinder-api 2024.2 23e5ad899301 17 hours ago 1.41GB 2025-06-02 18:06:36.920319 | orchestrator | registry.osism.tech/kolla/cinder-scheduler 2024.2 45b363b7482a 17 hours ago 1.41GB 2025-06-02 18:06:36.920360 | 
orchestrator | registry.osism.tech/kolla/nova-api 2024.2 760164fe4759 17 hours ago 1.29GB 2025-06-02 18:06:36.920381 | orchestrator | registry.osism.tech/kolla/nova-conductor 2024.2 f5741b323fe9 17 hours ago 1.29GB 2025-06-02 18:06:36.920401 | orchestrator | registry.osism.tech/kolla/nova-scheduler 2024.2 ef9c983c3ed3 17 hours ago 1.3GB 2025-06-02 18:06:36.920421 | orchestrator | registry.osism.tech/kolla/nova-novncproxy 2024.2 35396146c866 17 hours ago 1.42GB 2025-06-02 18:06:36.920442 | orchestrator | registry.osism.tech/kolla/glance-api 2024.2 70795d3e49ef 17 hours ago 1.15GB 2025-06-02 18:06:36.920473 | orchestrator | registry.osism.tech/kolla/magnum-conductor 2024.2 de33a20e612e 17 hours ago 1.31GB 2025-06-02 18:06:36.920486 | orchestrator | registry.osism.tech/kolla/magnum-api 2024.2 462af32e366a 17 hours ago 1.2GB 2025-06-02 18:06:36.920499 | orchestrator | registry.osism.tech/kolla/barbican-worker 2024.2 21905100e3ed 17 hours ago 1.06GB 2025-06-02 18:06:36.920512 | orchestrator | registry.osism.tech/kolla/barbican-api 2024.2 9c686edf4034 17 hours ago 1.06GB 2025-06-02 18:06:36.920524 | orchestrator | registry.osism.tech/kolla/barbican-keystone-listener 2024.2 e5000fc07327 17 hours ago 1.06GB 2025-06-02 18:06:36.920536 | orchestrator | registry.osism.tech/kolla/octavia-housekeeping 2024.2 db5d29469dee 17 hours ago 1.1GB 2025-06-02 18:06:36.920549 | orchestrator | registry.osism.tech/kolla/octavia-worker 2024.2 47facbd328df 17 hours ago 1.1GB 2025-06-02 18:06:36.920562 | orchestrator | registry.osism.tech/kolla/octavia-driver-agent 2024.2 a89f287066ef 17 hours ago 1.12GB 2025-06-02 18:06:36.920575 | orchestrator | registry.osism.tech/kolla/octavia-health-manager 2024.2 1f4bea213a07 17 hours ago 1.1GB 2025-06-02 18:06:36.920588 | orchestrator | registry.osism.tech/kolla/octavia-api 2024.2 97ff50a4c378 17 hours ago 1.12GB 2025-06-02 18:06:36.920601 | orchestrator | registry.osism.tech/kolla/keystone-ssh 2024.2 c4ed2f5a2192 17 hours ago 1.11GB 2025-06-02 
18:06:36.920619 | orchestrator | registry.osism.tech/kolla/keystone-fernet 2024.2 ea224ddfbd63 17 hours ago 1.11GB 2025-06-02 18:06:36.920638 | orchestrator | registry.osism.tech/kolla/keystone 2024.2 68b4a4b40b7c 17 hours ago 1.13GB 2025-06-02 18:06:36.920657 | orchestrator | registry.osism.tech/kolla/placement-api 2024.2 8f7230e2e54a 17 hours ago 1.04GB 2025-06-02 18:06:36.920676 | orchestrator | registry.osism.tech/kolla/designate-producer 2024.2 3a64d65ac616 17 hours ago 1.05GB 2025-06-02 18:06:36.920715 | orchestrator | registry.osism.tech/kolla/designate-central 2024.2 c3e9f7a9a34d 17 hours ago 1.05GB 2025-06-02 18:06:36.920735 | orchestrator | registry.osism.tech/kolla/designate-worker 2024.2 99480384bf9d 17 hours ago 1.06GB 2025-06-02 18:06:36.920754 | orchestrator | registry.osism.tech/kolla/designate-api 2024.2 16d05b3fd708 17 hours ago 1.05GB 2025-06-02 18:06:36.920769 | orchestrator | registry.osism.tech/kolla/designate-backend-bind9 2024.2 5935e336ac71 17 hours ago 1.06GB 2025-06-02 18:06:36.920780 | orchestrator | registry.osism.tech/kolla/designate-mdns 2024.2 ad58c3a011c5 17 hours ago 1.05GB 2025-06-02 18:06:36.920823 | orchestrator | registry.osism.tech/kolla/neutron-server 2024.2 321a68afc007 17 hours ago 1.25GB 2025-06-02 18:06:37.182243 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-02 18:06:37.192552 | orchestrator | + set -e 2025-06-02 18:06:37.192641 | orchestrator | + source /opt/manager-vars.sh 2025-06-02 18:06:37.194230 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-02 18:06:37.194287 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-02 18:06:37.194300 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-02 18:06:37.194311 | orchestrator | ++ CEPH_VERSION=reef 2025-06-02 18:06:37.194322 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-02 18:06:37.194335 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-02 18:06:37.194346 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-02 
18:06:37.194357 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-02 18:06:37.194368 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-02 18:06:37.194378 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-02 18:06:37.194389 | orchestrator | ++ export ARA=false 2025-06-02 18:06:37.194400 | orchestrator | ++ ARA=false 2025-06-02 18:06:37.194411 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-02 18:06:37.194421 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-02 18:06:37.194526 | orchestrator | ++ export TEMPEST=false 2025-06-02 18:06:37.194538 | orchestrator | ++ TEMPEST=false 2025-06-02 18:06:37.194549 | orchestrator | ++ export IS_ZUUL=true 2025-06-02 18:06:37.194560 | orchestrator | ++ IS_ZUUL=true 2025-06-02 18:06:37.194603 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65 2025-06-02 18:06:37.194623 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65 2025-06-02 18:06:37.194634 | orchestrator | ++ export EXTERNAL_API=false 2025-06-02 18:06:37.194645 | orchestrator | ++ EXTERNAL_API=false 2025-06-02 18:06:37.194656 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-02 18:06:37.194666 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-02 18:06:37.194677 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-02 18:06:37.194688 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-02 18:06:37.194698 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-02 18:06:37.194709 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-02 18:06:37.194730 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-02 18:06:37.194741 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-02 18:06:37.202251 | orchestrator | + set -e 2025-06-02 18:06:37.202322 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 18:06:37.202336 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 18:06:37.202348 | orchestrator | ++ INTERACTIVE=false 2025-06-02 18:06:37.202359 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 18:06:37.202371 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 18:06:37.202382 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-02 18:06:37.203641 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-02 18:06:37.206616 | orchestrator | 2025-06-02 18:06:37.206690 | orchestrator | # Ceph status 2025-06-02 18:06:37.206705 | orchestrator | 2025-06-02 18:06:37.206717 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-02 18:06:37.206729 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-02 18:06:37.206740 | orchestrator | + echo 2025-06-02 18:06:37.206751 | orchestrator | + echo '# Ceph status' 2025-06-02 18:06:37.206762 | orchestrator | + echo 2025-06-02 18:06:37.206773 | orchestrator | + ceph -s 2025-06-02 18:06:37.798460 | orchestrator | cluster: 2025-06-02 18:06:37.798595 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-02 18:06:37.798625 | orchestrator | health: HEALTH_OK 2025-06-02 18:06:37.798645 | orchestrator | 2025-06-02 18:06:37.798665 | orchestrator | services: 2025-06-02 18:06:37.798686 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-06-02 18:06:37.798707 | orchestrator | mgr: testbed-node-0(active, since 15m), standbys: testbed-node-2, testbed-node-1 2025-06-02 18:06:37.798729 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-02 18:06:37.798751 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m) 2025-06-02 18:06:37.798772 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-02 18:06:37.798907 | orchestrator | 2025-06-02 18:06:37.798934 | orchestrator | data: 2025-06-02 18:06:37.798954 | orchestrator | volumes: 1/1 healthy 2025-06-02 18:06:37.798975 | orchestrator | pools: 14 pools, 401 pgs 2025-06-02 18:06:37.798996 | orchestrator | objects: 524 objects, 2.2 GiB 2025-06-02 
18:06:37.799017 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-02 18:06:37.799036 | orchestrator | pgs: 401 active+clean 2025-06-02 18:06:37.799054 | orchestrator | 2025-06-02 18:06:37.845708 | orchestrator | 2025-06-02 18:06:37.845842 | orchestrator | # Ceph versions 2025-06-02 18:06:37.845858 | orchestrator | 2025-06-02 18:06:37.845869 | orchestrator | + echo 2025-06-02 18:06:37.845879 | orchestrator | + echo '# Ceph versions' 2025-06-02 18:06:37.845889 | orchestrator | + echo 2025-06-02 18:06:37.845899 | orchestrator | + ceph versions 2025-06-02 18:06:38.458079 | orchestrator | { 2025-06-02 18:06:38.458202 | orchestrator | "mon": { 2025-06-02 18:06:38.458225 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 18:06:38.458243 | orchestrator | }, 2025-06-02 18:06:38.458254 | orchestrator | "mgr": { 2025-06-02 18:06:38.458263 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 18:06:38.458271 | orchestrator | }, 2025-06-02 18:06:38.458280 | orchestrator | "osd": { 2025-06-02 18:06:38.458289 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-02 18:06:38.458298 | orchestrator | }, 2025-06-02 18:06:38.458306 | orchestrator | "mds": { 2025-06-02 18:06:38.458315 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 18:06:38.458324 | orchestrator | }, 2025-06-02 18:06:38.458333 | orchestrator | "rgw": { 2025-06-02 18:06:38.458346 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-02 18:06:38.458392 | orchestrator | }, 2025-06-02 18:06:38.458407 | orchestrator | "overall": { 2025-06-02 18:06:38.458423 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-02 18:06:38.458438 | orchestrator | } 2025-06-02 18:06:38.458453 
| orchestrator | } 2025-06-02 18:06:38.500233 | orchestrator | 2025-06-02 18:06:38.500312 | orchestrator | # Ceph OSD tree 2025-06-02 18:06:38.500322 | orchestrator | 2025-06-02 18:06:38.500330 | orchestrator | + echo 2025-06-02 18:06:38.500338 | orchestrator | + echo '# Ceph OSD tree' 2025-06-02 18:06:38.500346 | orchestrator | + echo 2025-06-02 18:06:38.500353 | orchestrator | + ceph osd df tree 2025-06-02 18:06:39.034394 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-02 18:06:39.034499 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-06-02 18:06:39.034511 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-06-02 18:06:39.034521 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.32 1.07 190 up osd.0 2025-06-02 18:06:39.034529 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.52 0.93 202 up osd.4 2025-06-02 18:06:39.034538 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-06-02 18:06:39.034547 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.1 GiB 1.0 GiB 1 KiB 70 MiB 19 GiB 5.51 0.93 195 up osd.2 2025-06-02 18:06:39.034555 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.2 GiB 1 KiB 74 MiB 19 GiB 6.32 1.07 195 up osd.5 2025-06-02 18:06:39.034563 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-02 18:06:39.034572 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.75 1.14 184 up osd.1 2025-06-02 18:06:39.034581 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.0 GiB 971 MiB 1 KiB 70 MiB 19 GiB 5.09 0.86 204 up osd.3 2025-06-02 18:06:39.034589 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-06-02 18:06:39.034597 | orchestrator | 
MIN/MAX VAR: 0.86/1.14 STDDEV: 0.58
2025-06-02 18:06:39.095447 | orchestrator |
2025-06-02 18:06:39.095572 | orchestrator | # Ceph monitor status
2025-06-02 18:06:39.095595 | orchestrator |
2025-06-02 18:06:39.095637 | orchestrator | + echo
2025-06-02 18:06:39.095655 | orchestrator | + echo '# Ceph monitor status'
2025-06-02 18:06:39.095673 | orchestrator | + echo
2025-06-02 18:06:39.095689 | orchestrator | + ceph mon stat
2025-06-02 18:06:39.741603 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2
2025-06-02 18:06:39.798087 | orchestrator |
2025-06-02 18:06:39.798272 | orchestrator | # Ceph quorum status
2025-06-02 18:06:39.798286 | orchestrator |
2025-06-02 18:06:39.798294 | orchestrator | + echo
2025-06-02 18:06:39.798302 | orchestrator | + echo '# Ceph quorum status'
2025-06-02 18:06:39.798310 | orchestrator | + echo
2025-06-02 18:06:39.798328 | orchestrator | + ceph quorum_status
2025-06-02 18:06:39.798645 | orchestrator | + jq
2025-06-02 18:06:40.567109 | orchestrator | {
2025-06-02 18:06:40.567207 | orchestrator |   "election_epoch": 8,
2025-06-02 18:06:40.567221 | orchestrator |   "quorum": [
2025-06-02 18:06:40.567232 | orchestrator |     0,
2025-06-02 18:06:40.567242 | orchestrator |     1,
2025-06-02 18:06:40.567252 | orchestrator |     2
2025-06-02 18:06:40.567262 | orchestrator |   ],
2025-06-02 18:06:40.567271 | orchestrator |   "quorum_names": [
2025-06-02 18:06:40.567281 | orchestrator |     "testbed-node-0",
2025-06-02 18:06:40.567290 | orchestrator |     "testbed-node-1",
2025-06-02 18:06:40.567300 | orchestrator |     "testbed-node-2"
2025-06-02 18:06:40.567309 | orchestrator |   ],
2025-06-02 18:06:40.567319 | orchestrator |   "quorum_leader_name": "testbed-node-0",
2025-06-02 18:06:40.567357 | orchestrator |   "quorum_age": 1695,
2025-06-02 18:06:40.567367 | orchestrator |   "features": {
2025-06-02 18:06:40.567376 | orchestrator |     "quorum_con": "4540138322906710015",
2025-06-02 18:06:40.567386 | orchestrator |     "quorum_mon": [
2025-06-02 18:06:40.567395 | orchestrator |       "kraken",
2025-06-02 18:06:40.567405 | orchestrator |       "luminous",
2025-06-02 18:06:40.567498 | orchestrator |       "mimic",
2025-06-02 18:06:40.567511 | orchestrator |       "osdmap-prune",
2025-06-02 18:06:40.567521 | orchestrator |       "nautilus",
2025-06-02 18:06:40.567530 | orchestrator |       "octopus",
2025-06-02 18:06:40.567539 | orchestrator |       "pacific",
2025-06-02 18:06:40.567549 | orchestrator |       "elector-pinging",
2025-06-02 18:06:40.567559 | orchestrator |       "quincy",
2025-06-02 18:06:40.567568 | orchestrator |       "reef"
2025-06-02 18:06:40.567577 | orchestrator |     ]
2025-06-02 18:06:40.567587 | orchestrator |   },
2025-06-02 18:06:40.567596 | orchestrator |   "monmap": {
2025-06-02 18:06:40.567605 | orchestrator |     "epoch": 1,
2025-06-02 18:06:40.567615 | orchestrator |     "fsid": "11111111-1111-1111-1111-111111111111",
2025-06-02 18:06:40.567625 | orchestrator |     "modified": "2025-06-02T17:38:07.686120Z",
2025-06-02 18:06:40.567635 | orchestrator |     "created": "2025-06-02T17:38:07.686120Z",
2025-06-02 18:06:40.567644 | orchestrator |     "min_mon_release": 18,
2025-06-02 18:06:40.567654 | orchestrator |     "min_mon_release_name": "reef",
2025-06-02 18:06:40.567664 | orchestrator |     "election_strategy": 1,
2025-06-02 18:06:40.567675 | orchestrator |     "disallowed_leaders: ": "",
2025-06-02 18:06:40.567687 | orchestrator |     "stretch_mode": false,
2025-06-02 18:06:40.567698 | orchestrator |     "tiebreaker_mon": "",
2025-06-02 18:06:40.567709 | orchestrator |     "removed_ranks: ": "",
2025-06-02 18:06:40.567720 | orchestrator |     "features": {
2025-06-02 18:06:40.567731 | orchestrator |       "persistent": [
2025-06-02 18:06:40.567741 | orchestrator |         "kraken",
2025-06-02 18:06:40.567752 | orchestrator |         "luminous",
2025-06-02 18:06:40.567763 | orchestrator |         "mimic",
2025-06-02 18:06:40.567774 | orchestrator |         "osdmap-prune",
2025-06-02 18:06:40.567810 | orchestrator |         "nautilus",
2025-06-02 18:06:40.567827 | orchestrator |         "octopus",
2025-06-02 18:06:40.567844 | orchestrator |         "pacific",
2025-06-02 18:06:40.567859 | orchestrator |         "elector-pinging",
2025-06-02 18:06:40.567877 | orchestrator |         "quincy",
2025-06-02 18:06:40.567894 | orchestrator |         "reef"
2025-06-02 18:06:40.567909 | orchestrator |       ],
2025-06-02 18:06:40.567922 | orchestrator |       "optional": []
2025-06-02 18:06:40.567938 | orchestrator |     },
2025-06-02 18:06:40.567955 | orchestrator |     "mons": [
2025-06-02 18:06:40.567971 | orchestrator |       {
2025-06-02 18:06:40.567987 | orchestrator |         "rank": 0,
2025-06-02 18:06:40.568002 | orchestrator |         "name": "testbed-node-0",
2025-06-02 18:06:40.568017 | orchestrator |         "public_addrs": {
2025-06-02 18:06:40.568034 | orchestrator |           "addrvec": [
2025-06-02 18:06:40.568049 | orchestrator |             {
2025-06-02 18:06:40.568066 | orchestrator |               "type": "v2",
2025-06-02 18:06:40.568087 | orchestrator |               "addr": "192.168.16.10:3300",
2025-06-02 18:06:40.568104 | orchestrator |               "nonce": 0
2025-06-02 18:06:40.568115 | orchestrator |             },
2025-06-02 18:06:40.568126 | orchestrator |             {
2025-06-02 18:06:40.568136 | orchestrator |               "type": "v1",
2025-06-02 18:06:40.568147 | orchestrator |               "addr": "192.168.16.10:6789",
2025-06-02 18:06:40.568157 | orchestrator |               "nonce": 0
2025-06-02 18:06:40.568168 | orchestrator |             }
2025-06-02 18:06:40.568179 | orchestrator |           ]
2025-06-02 18:06:40.568189 | orchestrator |         },
2025-06-02 18:06:40.568200 | orchestrator |         "addr": "192.168.16.10:6789/0",
2025-06-02 18:06:40.568211 | orchestrator |         "public_addr": "192.168.16.10:6789/0",
2025-06-02 18:06:40.568221 | orchestrator |         "priority": 0,
2025-06-02 18:06:40.568232 | orchestrator |         "weight": 0,
2025-06-02 18:06:40.568242 | orchestrator |         "crush_location": "{}"
2025-06-02 18:06:40.568253 | orchestrator |       },
2025-06-02 18:06:40.568263 | orchestrator |       {
2025-06-02 18:06:40.568274 | orchestrator |         "rank": 1,
2025-06-02 18:06:40.568285 | orchestrator |         "name": "testbed-node-1",
2025-06-02 18:06:40.568295 | orchestrator |         "public_addrs": {
2025-06-02 18:06:40.568306 | orchestrator |           "addrvec": [
2025-06-02 18:06:40.568316 | orchestrator |             {
2025-06-02 18:06:40.568327 | orchestrator |               "type": "v2",
2025-06-02 18:06:40.568338 | orchestrator |               "addr": "192.168.16.11:3300",
2025-06-02 18:06:40.568348 | orchestrator |               "nonce": 0
2025-06-02 18:06:40.568359 | orchestrator |             },
2025-06-02 18:06:40.568369 | orchestrator |             {
2025-06-02 18:06:40.568380 | orchestrator |               "type": "v1",
2025-06-02 18:06:40.568404 | orchestrator |               "addr": "192.168.16.11:6789",
2025-06-02 18:06:40.568415 | orchestrator |               "nonce": 0
2025-06-02 18:06:40.568425 | orchestrator |             }
2025-06-02 18:06:40.568436 | orchestrator |           ]
2025-06-02 18:06:40.568447 | orchestrator |         },
2025-06-02 18:06:40.568457 | orchestrator |         "addr": "192.168.16.11:6789/0",
2025-06-02 18:06:40.568468 | orchestrator |         "public_addr": "192.168.16.11:6789/0",
2025-06-02 18:06:40.568479 | orchestrator |         "priority": 0,
2025-06-02 18:06:40.568528 | orchestrator |         "weight": 0,
2025-06-02 18:06:40.568546 | orchestrator |         "crush_location": "{}"
2025-06-02 18:06:40.568561 | orchestrator |       },
2025-06-02 18:06:40.568578 | orchestrator |       {
2025-06-02 18:06:40.568596 | orchestrator |         "rank": 2,
2025-06-02 18:06:40.568612 | orchestrator |         "name": "testbed-node-2",
2025-06-02 18:06:40.568630 | orchestrator |         "public_addrs": {
2025-06-02 18:06:40.568647 | orchestrator |           "addrvec": [
2025-06-02 18:06:40.568664 | orchestrator |             {
2025-06-02 18:06:40.568684 | orchestrator |               "type": "v2",
2025-06-02 18:06:40.568701 | orchestrator |               "addr": "192.168.16.12:3300",
2025-06-02 18:06:40.568721 | orchestrator |               "nonce": 0
2025-06-02 18:06:40.568738 | orchestrator |             },
2025-06-02 18:06:40.568756 | orchestrator |             {
2025-06-02 18:06:40.568774 | orchestrator |               "type": "v1",
2025-06-02 18:06:40.568867 | orchestrator |               "addr": "192.168.16.12:6789",
2025-06-02 18:06:40.568886 | orchestrator |               "nonce": 0
2025-06-02 18:06:40.568904 | orchestrator |             }
2025-06-02 18:06:40.568921 | orchestrator |           ]
2025-06-02 18:06:40.568938 | orchestrator |         },
2025-06-02 18:06:40.568955 | orchestrator |         "addr": "192.168.16.12:6789/0",
2025-06-02 18:06:40.568971 | orchestrator |         "public_addr": "192.168.16.12:6789/0",
2025-06-02 18:06:40.568988 | orchestrator |         "priority": 0,
2025-06-02 18:06:40.569005 | orchestrator |         "weight": 0,
2025-06-02 18:06:40.569024 | orchestrator |         "crush_location": "{}"
2025-06-02 18:06:40.569042 | orchestrator |       }
2025-06-02 18:06:40.569059 | orchestrator |     ]
2025-06-02 18:06:40.569078 | orchestrator |   }
2025-06-02 18:06:40.569096 | orchestrator | }
2025-06-02 18:06:40.569115 | orchestrator |
2025-06-02 18:06:40.569134 | orchestrator | # Ceph free space status
2025-06-02 18:06:40.569152 | orchestrator |
2025-06-02 18:06:40.569170 | orchestrator | + echo
2025-06-02 18:06:40.569188 | orchestrator | + echo '# Ceph free space status'
2025-06-02 18:06:40.569206 | orchestrator | + echo
2025-06-02 18:06:40.569224 | orchestrator | + ceph df
2025-06-02 18:06:41.155907 | orchestrator | --- RAW STORAGE ---
2025-06-02 18:06:41.156004 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED
2025-06-02 18:06:41.156025 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-06-02 18:06:41.156031 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92
2025-06-02 18:06:41.156038 | orchestrator |
2025-06-02 18:06:41.156045 | orchestrator | --- POOLS ---
2025-06-02 18:06:41.156052 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL
2025-06-02 18:06:41.156060 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 53 GiB
2025-06-02 18:06:41.156077 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB
2025-06-02 18:06:41.156084 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB
2025-06-02 18:06:41.156090 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB
2025-06-02 18:06:41.156096 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB
2025-06-02 18:06:41.156102 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB
2025-06-02 18:06:41.156108 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB
2025-06-02 18:06:41.156114 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB
2025-06-02 18:06:41.156120 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 53 GiB
2025-06-02 18:06:41.156126 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB
2025-06-02 18:06:41.156133 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB
2025-06-02 18:06:41.156139 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.92 35 GiB
2025-06-02 18:06:41.156145 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB
2025-06-02 18:06:41.156170 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB
2025-06-02 18:06:41.200630 | orchestrator | ++ semver latest 5.0.0
2025-06-02 18:06:41.241772 | orchestrator | + [[ -1 -eq -1 ]]
2025-06-02 18:06:41.241873 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-06-02 18:06:41.241880 | orchestrator | + [[ ! -e /etc/redhat-release ]]
2025-06-02 18:06:41.241886 | orchestrator | + osism apply facts
2025-06-02 18:06:44.216700 | orchestrator | Registering Redlock._acquired_script
2025-06-02 18:06:44.216924 | orchestrator | Registering Redlock._extend_script
2025-06-02 18:06:44.216957 | orchestrator | Registering Redlock._release_script
2025-06-02 18:06:44.280291 | orchestrator | 2025-06-02 18:06:44 | INFO  | Task a46a4d4f-1484-4aff-97e2-756e63b6cc68 (facts) was prepared for execution.
2025-06-02 18:06:44.280387 | orchestrator | 2025-06-02 18:06:44 | INFO  | It takes a moment until task a46a4d4f-1484-4aff-97e2-756e63b6cc68 (facts) has been started and output is visible here.
2025-06-02 18:06:48.818881 | orchestrator |
2025-06-02 18:06:48.818989 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-06-02 18:06:48.820140 | orchestrator |
2025-06-02 18:06:48.822249 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-06-02 18:06:48.822305 | orchestrator | Monday 02 June 2025 18:06:48 +0000 (0:00:00.292) 0:00:00.292 ***********
2025-06-02 18:06:50.450883 | orchestrator | ok: [testbed-manager]
2025-06-02 18:06:50.451452 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:06:50.451932 | orchestrator | ok: [testbed-node-1]
2025-06-02 18:06:50.455199 | orchestrator | ok: [testbed-node-2]
2025-06-02 18:06:50.455818 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:06:50.456828 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:06:50.457995 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:06:50.459059 | orchestrator |
2025-06-02 18:06:50.460660 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-06-02 18:06:50.462704 | orchestrator | Monday 02 June 2025 18:06:50 +0000 (0:00:01.636) 0:00:01.929 ***********
2025-06-02 18:06:50.706399 | orchestrator | skipping: [testbed-manager]
2025-06-02 18:06:50.823279 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:06:50.925834 | orchestrator | skipping: [testbed-node-1]
2025-06-02 18:06:51.004409 | orchestrator | skipping: [testbed-node-2]
2025-06-02 18:06:51.086994 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:06:51.913205 | orchestrator | skipping: [testbed-node-4]
2025-06-02 18:06:51.915578 | orchestrator | skipping: [testbed-node-5]
2025-06-02 18:06:51.916261 | orchestrator |
2025-06-02 18:06:51.918526 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-06-02 18:06:51.919640 | orchestrator |
2025-06-02 18:06:51.921488 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-06-02 18:06:51.922209 | orchestrator | Monday 02 June 2025 18:06:51 +0000 (0:00:01.464) 0:00:03.394 ***********
2025-06-02 18:06:57.404065 | orchestrator | ok: [testbed-node-2]
2025-06-02 18:06:57.404196 | orchestrator | ok: [testbed-node-1]
2025-06-02 18:06:57.405298 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:06:57.406370 | orchestrator | ok: [testbed-manager]
2025-06-02 18:06:57.406413 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:06:57.406994 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:06:57.411180 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:06:57.412099 | orchestrator |
2025-06-02 18:06:57.413352 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-06-02 18:06:57.414931 | orchestrator |
2025-06-02 18:06:57.416276 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-06-02 18:06:57.417212 | orchestrator | Monday 02 June 2025 18:06:57 +0000 (0:00:05.494) 0:00:08.888 ***********
2025-06-02 18:06:57.588565 | orchestrator | skipping: [testbed-manager]
2025-06-02 18:06:57.687146 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:06:57.775077 | orchestrator | skipping: [testbed-node-1]
2025-06-02 18:06:57.864726 | orchestrator | skipping: [testbed-node-2]
2025-06-02 18:06:57.951640 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:06:58.000526 | orchestrator | skipping: [testbed-node-4]
2025-06-02 18:06:58.001548 | orchestrator | skipping: [testbed-node-5]
2025-06-02 18:06:58.011696 | orchestrator |
2025-06-02 18:06:58.011759 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 18:06:58.011812 | orchestrator | 2025-06-02 18:06:58 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-02 18:06:58.011825 | orchestrator | 2025-06-02 18:06:58 | INFO  | Please wait and do not abort execution.
2025-06-02 18:06:58.012343 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 18:06:58.014251 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 18:06:58.019413 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 18:06:58.019470 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 18:06:58.020760 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 18:06:58.020854 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 18:06:58.021321 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 18:06:58.022374 | orchestrator |
2025-06-02 18:06:58.022457 | orchestrator |
2025-06-02 18:06:58.023355 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 18:06:58.023384 | orchestrator | Monday 02 June 2025 18:06:57 +0000 (0:00:00.595) 0:00:09.483 ***********
2025-06-02 18:06:58.024639 | orchestrator | ===============================================================================
2025-06-02 18:06:58.024726 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.49s
2025-06-02 18:06:58.025028 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.64s
2025-06-02 18:06:58.025492 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.46s
2025-06-02 18:06:58.025708 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s
2025-06-02 18:06:58.886219 | orchestrator | + osism validate ceph-mons
2025-06-02 18:07:00.732901 | orchestrator | Registering Redlock._acquired_script
2025-06-02 18:07:00.733012 | orchestrator | Registering Redlock._extend_script
2025-06-02 18:07:00.733028 | orchestrator | Registering Redlock._release_script
2025-06-02 18:07:22.339183 | orchestrator |
2025-06-02 18:07:22.339276 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-06-02 18:07:22.339288 | orchestrator |
2025-06-02 18:07:22.339295 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-06-02 18:07:22.339302 | orchestrator | Monday 02 June 2025 18:07:05 +0000 (0:00:00.478) 0:00:00.478 ***********
2025-06-02 18:07:22.339309 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 18:07:22.339316 | orchestrator |
2025-06-02 18:07:22.339322 | orchestrator | TASK [Create report output directory] ******************************************
2025-06-02 18:07:22.339328 | orchestrator | Monday 02 June 2025 18:07:06 +0000 (0:00:00.673) 0:00:01.151 ***********
2025-06-02 18:07:22.339335 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 18:07:22.339341 | orchestrator |
2025-06-02 18:07:22.339347 | orchestrator | TASK [Define report vars] ******************************************************
2025-06-02 18:07:22.339353 | orchestrator | Monday 02 June 2025 18:07:07 +0000 (0:00:00.279) 0:00:02.041 ***********
2025-06-02 18:07:22.339359 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.339366 | orchestrator |
2025-06-02 18:07:22.339372 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-06-02 18:07:22.339396 | orchestrator | Monday 02 June 2025 18:07:07 +0000 (0:00:00.332) 0:00:02.321 ***********
2025-06-02 18:07:22.339403 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.339409 | orchestrator | ok: [testbed-node-1]
2025-06-02 18:07:22.339415 | orchestrator | ok: [testbed-node-2]
2025-06-02 18:07:22.339421 | orchestrator |
2025-06-02 18:07:22.339428 | orchestrator | TASK [Get container info] ******************************************************
2025-06-02 18:07:22.339434 | orchestrator | Monday 02 June 2025 18:07:07 +0000 (0:00:00.332) 0:00:02.653 ***********
2025-06-02 18:07:22.339440 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.339446 | orchestrator | ok: [testbed-node-2]
2025-06-02 18:07:22.339452 | orchestrator | ok: [testbed-node-1]
2025-06-02 18:07:22.339458 | orchestrator |
2025-06-02 18:07:22.339464 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-06-02 18:07:22.339470 | orchestrator | Monday 02 June 2025 18:07:08 +0000 (0:00:01.082) 0:00:03.735 ***********
2025-06-02 18:07:22.339476 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:07:22.339482 | orchestrator | skipping: [testbed-node-1]
2025-06-02 18:07:22.339489 | orchestrator | skipping: [testbed-node-2]
2025-06-02 18:07:22.339495 | orchestrator |
2025-06-02 18:07:22.339501 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-06-02 18:07:22.339507 | orchestrator | Monday 02 June 2025 18:07:08 +0000 (0:00:00.299) 0:00:04.035 ***********
2025-06-02 18:07:22.339513 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.339519 | orchestrator | ok: [testbed-node-1]
2025-06-02 18:07:22.339525 | orchestrator | ok: [testbed-node-2]
2025-06-02 18:07:22.339531 | orchestrator |
2025-06-02 18:07:22.339537 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 18:07:22.339543 | orchestrator | Monday 02 June 2025 18:07:09 +0000 (0:00:00.636) 0:00:04.672 ***********
2025-06-02 18:07:22.339549 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.339555 | orchestrator | ok: [testbed-node-1]
2025-06-02 18:07:22.339561 | orchestrator | ok: [testbed-node-2]
2025-06-02 18:07:22.339567 | orchestrator |
2025-06-02 18:07:22.339573 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-06-02 18:07:22.339579 | orchestrator | Monday 02 June 2025 18:07:09 +0000 (0:00:00.322) 0:00:04.994 ***********
2025-06-02 18:07:22.339586 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:07:22.339592 | orchestrator | skipping: [testbed-node-1]
2025-06-02 18:07:22.339598 | orchestrator | skipping: [testbed-node-2]
2025-06-02 18:07:22.339604 | orchestrator |
2025-06-02 18:07:22.339610 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-06-02 18:07:22.339616 | orchestrator | Monday 02 June 2025 18:07:10 +0000 (0:00:00.378) 0:00:05.373 ***********
2025-06-02 18:07:22.339622 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.339628 | orchestrator | ok: [testbed-node-1]
2025-06-02 18:07:22.339636 | orchestrator | ok: [testbed-node-2]
2025-06-02 18:07:22.339646 | orchestrator |
2025-06-02 18:07:22.339656 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 18:07:22.339682 | orchestrator | Monday 02 June 2025 18:07:10 +0000 (0:00:00.347) 0:00:05.721 ***********
2025-06-02 18:07:22.339693 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:07:22.339704 | orchestrator |
2025-06-02 18:07:22.339711 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 18:07:22.339717 | orchestrator | Monday 02 June 2025 18:07:11 +0000 (0:00:00.876) 0:00:06.597 ***********
2025-06-02 18:07:22.339723 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:07:22.339729 | orchestrator |
2025-06-02 18:07:22.339735 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 18:07:22.339742 | orchestrator | Monday 02 June 2025 18:07:11 +0000 (0:00:00.292) 0:00:06.890 ***********
2025-06-02 18:07:22.339748 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:07:22.339785 | orchestrator |
2025-06-02 18:07:22.339791 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 18:07:22.339798 | orchestrator | Monday 02 June 2025 18:07:12 +0000 (0:00:00.268) 0:00:07.159 ***********
2025-06-02 18:07:22.339810 | orchestrator |
2025-06-02 18:07:22.339817 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 18:07:22.339823 | orchestrator | Monday 02 June 2025 18:07:12 +0000 (0:00:00.113) 0:00:07.272 ***********
2025-06-02 18:07:22.339829 | orchestrator |
2025-06-02 18:07:22.339835 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 18:07:22.339841 | orchestrator | Monday 02 June 2025 18:07:12 +0000 (0:00:00.083) 0:00:07.356 ***********
2025-06-02 18:07:22.339847 | orchestrator |
2025-06-02 18:07:22.339853 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 18:07:22.339859 | orchestrator | Monday 02 June 2025 18:07:12 +0000 (0:00:00.081) 0:00:07.437 ***********
2025-06-02 18:07:22.339865 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:07:22.339871 | orchestrator |
2025-06-02 18:07:22.339877 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-06-02 18:07:22.339883 | orchestrator | Monday 02 June 2025 18:07:12 +0000 (0:00:00.280) 0:00:07.718 ***********
2025-06-02 18:07:22.339890 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:07:22.339896 | orchestrator |
2025-06-02 18:07:22.339916 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-06-02 18:07:22.339923 | orchestrator | Monday 02 June 2025 18:07:13 +0000 (0:00:00.366) 0:00:08.085 ***********
2025-06-02 18:07:22.339929 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.339935 | orchestrator |
2025-06-02 18:07:22.339941 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-06-02 18:07:22.339947 | orchestrator | Monday 02 June 2025 18:07:13 +0000 (0:00:00.128) 0:00:08.213 ***********
2025-06-02 18:07:22.339953 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:07:22.339959 | orchestrator |
2025-06-02 18:07:22.339965 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-06-02 18:07:22.339972 | orchestrator | Monday 02 June 2025 18:07:15 +0000 (0:00:01.887) 0:00:10.100 ***********
2025-06-02 18:07:22.339978 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.339984 | orchestrator |
2025-06-02 18:07:22.339990 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-06-02 18:07:22.339996 | orchestrator | Monday 02 June 2025 18:07:15 +0000 (0:00:00.346) 0:00:10.446 ***********
2025-06-02 18:07:22.340002 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:07:22.340008 | orchestrator |
2025-06-02 18:07:22.340014 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-06-02 18:07:22.340020 | orchestrator | Monday 02 June 2025 18:07:15 +0000 (0:00:00.326) 0:00:10.773 ***********
2025-06-02 18:07:22.340026 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.340032 | orchestrator |
2025-06-02 18:07:22.340038 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-06-02 18:07:22.340045 | orchestrator | Monday 02 June 2025 18:07:16 +0000 (0:00:00.330) 0:00:11.103 ***********
2025-06-02 18:07:22.340051 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.340057 | orchestrator |
2025-06-02 18:07:22.340063 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-06-02 18:07:22.340069 | orchestrator | Monday 02 June 2025 18:07:16 +0000 (0:00:00.315) 0:00:11.419 ***********
2025-06-02 18:07:22.340075 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:07:22.340081 | orchestrator |
2025-06-02 18:07:22.340087 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-06-02 18:07:22.340093 | orchestrator | Monday 02 June 2025 18:07:16 +0000 (0:00:00.119) 0:00:11.538 ***********
2025-06-02 18:07:22.340099 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.340105 | orchestrator |
2025-06-02 18:07:22.340111 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-06-02 18:07:22.340117 | orchestrator | Monday 02 June 2025 18:07:16 +0000 (0:00:00.138) 0:00:11.677 ***********
2025-06-02 18:07:22.340123 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.340129 | orchestrator |
2025-06-02 18:07:22.340135 | orchestrator | TASK [Gather status data] ******************************************************
2025-06-02 18:07:22.340146 | orchestrator | Monday 02 June 2025 18:07:16 +0000 (0:00:00.147) 0:00:11.824 ***********
2025-06-02 18:07:22.340152 | orchestrator | changed: [testbed-node-0]
2025-06-02 18:07:22.340158 | orchestrator |
2025-06-02 18:07:22.340164 | orchestrator | TASK [Set health test data] ****************************************************
2025-06-02 18:07:22.340171 | orchestrator | Monday 02 June 2025 18:07:18 +0000 (0:00:01.338) 0:00:13.163 ***********
2025-06-02 18:07:22.340177 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.340183 | orchestrator |
2025-06-02 18:07:22.340193 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-06-02 18:07:22.340203 | orchestrator | Monday 02 June 2025 18:07:18 +0000 (0:00:00.293) 0:00:13.457 ***********
2025-06-02 18:07:22.340214 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:07:22.340224 | orchestrator |
2025-06-02 18:07:22.340234 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-06-02 18:07:22.340243 | orchestrator | Monday 02 June 2025 18:07:18 +0000 (0:00:00.140) 0:00:13.597 ***********
2025-06-02 18:07:22.340250 | orchestrator | ok: [testbed-node-0]
2025-06-02 18:07:22.340259 | orchestrator |
2025-06-02 18:07:22.340269 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-06-02 18:07:22.340280 | orchestrator | Monday 02 June 2025 18:07:18 +0000 (0:00:00.152) 0:00:13.750 ***********
2025-06-02 18:07:22.340290 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:07:22.340300 | orchestrator |
2025-06-02 18:07:22.340307 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-06-02 18:07:22.340313 | orchestrator | Monday 02 June 2025 18:07:18 +0000 (0:00:00.144) 0:00:13.895 ***********
2025-06-02 18:07:22.340319 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:07:22.340325 | orchestrator |
2025-06-02 18:07:22.340332 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-02 18:07:22.340342 | orchestrator | Monday 02 June 2025 18:07:19 +0000 (0:00:00.343) 0:00:14.238 ***********
2025-06-02 18:07:22.340351 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 18:07:22.340360 | orchestrator |
2025-06-02 18:07:22.340370 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-02 18:07:22.340380 | orchestrator | Monday 02 June 2025 18:07:19 +0000 (0:00:00.260) 0:00:14.498 ***********
2025-06-02 18:07:22.340391 | orchestrator | skipping: [testbed-node-0]
2025-06-02 18:07:22.340401 | orchestrator |
2025-06-02 18:07:22.340421 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 18:07:22.340430 | orchestrator | Monday 02 June 2025 18:07:19 +0000 (0:00:00.253) 0:00:14.752 ***********
2025-06-02 18:07:22.340441 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 18:07:22.340452 | orchestrator |
2025-06-02 18:07:22.340463 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 18:07:22.340473 | orchestrator | Monday 02 June 2025 18:07:21 +0000 (0:00:01.826) 0:00:16.578 ***********
2025-06-02 18:07:22.340486 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 18:07:22.340492 | orchestrator |
2025-06-02 18:07:22.340499 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 18:07:22.340505 | orchestrator | Monday 02 June 2025 18:07:21 +0000 (0:00:00.297) 0:00:16.876 ***********
2025-06-02 18:07:22.340511 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 18:07:22.340517 | orchestrator |
2025-06-02 18:07:22.340529 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 18:07:24.877301 | orchestrator | Monday 02 June 2025 18:07:22 +0000 (0:00:00.257) 0:00:17.134 ***********
2025-06-02 18:07:24.877411 | orchestrator |
2025-06-02 18:07:24.877426 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 18:07:24.877438 | orchestrator | Monday 02 June 2025 18:07:22 +0000 (0:00:00.071) 0:00:17.205 ***********
2025-06-02 18:07:24.877449 | orchestrator |
2025-06-02 18:07:24.877460 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 18:07:24.877497 | orchestrator | Monday 02 June 2025 18:07:22 +0000 (0:00:00.082) 0:00:17.288 ***********
2025-06-02 18:07:24.877508 | orchestrator |
2025-06-02 18:07:24.877519 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-02 18:07:24.877530 | orchestrator | Monday 02 June 2025 18:07:22 +0000 (0:00:00.078) 0:00:17.366 ***********
2025-06-02 18:07:24.877542 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-06-02 18:07:24.877552 | orchestrator |
2025-06-02 18:07:24.877563 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 18:07:24.877590 | orchestrator | Monday 02 June 2025 18:07:23 +0000 (0:00:01.595) 0:00:18.962 ***********
2025-06-02 18:07:24.877602 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-06-02 18:07:24.877613 | orchestrator |     "msg": [
2025-06-02 18:07:24.877625 | orchestrator |         "Validator run completed.",
2025-06-02 18:07:24.877636 | orchestrator |         "You can find the report file here:",
2025-06-02 18:07:24.877647 | orchestrator |         "/opt/reports/validator/ceph-mons-validator-2025-06-02T18:07:05+00:00-report.json",
2025-06-02 18:07:24.877658 | orchestrator |         "on the following host:",
2025-06-02 18:07:24.877669 | orchestrator |         "testbed-manager"
2025-06-02 18:07:24.877680 | orchestrator |     ]
2025-06-02 18:07:24.877691 | orchestrator | }
2025-06-02 18:07:24.877702 | orchestrator |
2025-06-02 18:07:24.877713 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 18:07:24.877725 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-06-02 18:07:24.877737 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 18:07:24.877869 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-06-02 18:07:24.877977 | orchestrator |
2025-06-02 18:07:24.877994 | orchestrator |
2025-06-02 18:07:24.878007 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 18:07:24.878116 | orchestrator | Monday 02 June 2025 18:07:24 +0000 (0:00:00.623) 0:00:19.585 ***********
2025-06-02 18:07:24.878129 | orchestrator | =============================================================================== 2025-06-02 18:07:24.878140 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.89s 2025-06-02 18:07:24.878151 | orchestrator | Aggregate test results step one ----------------------------------------- 1.83s 2025-06-02 18:07:24.878162 | orchestrator | Write report file ------------------------------------------------------- 1.60s 2025-06-02 18:07:24.878172 | orchestrator | Gather status data ------------------------------------------------------ 1.34s 2025-06-02 18:07:24.878183 | orchestrator | Get container info ------------------------------------------------------ 1.08s 2025-06-02 18:07:24.878194 | orchestrator | Create report output directory ------------------------------------------ 0.89s 2025-06-02 18:07:24.878242 | orchestrator | Aggregate test results step one ----------------------------------------- 0.88s 2025-06-02 18:07:24.878301 | orchestrator | Get timestamp for report file ------------------------------------------- 0.67s 2025-06-02 18:07:24.878313 | orchestrator | Set test result to passed if container is existing ---------------------- 0.64s 2025-06-02 18:07:24.878324 | orchestrator | Print report file information ------------------------------------------- 0.62s 2025-06-02 18:07:24.878335 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.38s 2025-06-02 18:07:24.878346 | orchestrator | Fail due to missing containers ------------------------------------------ 0.37s 2025-06-02 18:07:24.878357 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.35s 2025-06-02 18:07:24.878367 | orchestrator | Set quorum test data ---------------------------------------------------- 0.35s 2025-06-02 18:07:24.878378 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.34s 2025-06-02 
18:07:24.878400 | orchestrator | Prepare test data for container existance test -------------------------- 0.33s 2025-06-02 18:07:24.878411 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2025-06-02 18:07:24.878422 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.33s 2025-06-02 18:07:24.878432 | orchestrator | Prepare test data ------------------------------------------------------- 0.32s 2025-06-02 18:07:24.878472 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.32s 2025-06-02 18:07:25.156915 | orchestrator | + osism validate ceph-mgrs 2025-06-02 18:07:26.890228 | orchestrator | Registering Redlock._acquired_script 2025-06-02 18:07:26.890323 | orchestrator | Registering Redlock._extend_script 2025-06-02 18:07:26.890336 | orchestrator | Registering Redlock._release_script 2025-06-02 18:07:46.687955 | orchestrator | 2025-06-02 18:07:46.688080 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-06-02 18:07:46.688094 | orchestrator | 2025-06-02 18:07:46.688103 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-02 18:07:46.688112 | orchestrator | Monday 02 June 2025 18:07:31 +0000 (0:00:00.464) 0:00:00.464 *********** 2025-06-02 18:07:46.688121 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 18:07:46.688130 | orchestrator | 2025-06-02 18:07:46.688138 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-02 18:07:46.688146 | orchestrator | Monday 02 June 2025 18:07:32 +0000 (0:00:00.657) 0:00:01.122 *********** 2025-06-02 18:07:46.688188 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 18:07:46.688197 | orchestrator | 2025-06-02 18:07:46.688206 | orchestrator | TASK [Define report vars] 
****************************************************** 2025-06-02 18:07:46.688214 | orchestrator | Monday 02 June 2025 18:07:32 +0000 (0:00:00.855) 0:00:01.978 *********** 2025-06-02 18:07:46.688222 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:07:46.688231 | orchestrator | 2025-06-02 18:07:46.688240 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-02 18:07:46.688249 | orchestrator | Monday 02 June 2025 18:07:33 +0000 (0:00:00.258) 0:00:02.237 *********** 2025-06-02 18:07:46.688256 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:07:46.688264 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:07:46.688272 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:07:46.688280 | orchestrator | 2025-06-02 18:07:46.688288 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-02 18:07:46.688296 | orchestrator | Monday 02 June 2025 18:07:33 +0000 (0:00:00.301) 0:00:02.538 *********** 2025-06-02 18:07:46.688304 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:07:46.688311 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:07:46.688319 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:07:46.688327 | orchestrator | 2025-06-02 18:07:46.688335 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-02 18:07:46.688343 | orchestrator | Monday 02 June 2025 18:07:34 +0000 (0:00:00.979) 0:00:03.518 *********** 2025-06-02 18:07:46.688351 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:07:46.688359 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:07:46.688367 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:07:46.688375 | orchestrator | 2025-06-02 18:07:46.688382 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-02 18:07:46.688390 | orchestrator | Monday 02 June 2025 18:07:34 +0000 (0:00:00.311) 0:00:03.829 *********** 2025-06-02 
18:07:46.688398 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:07:46.688406 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:07:46.688414 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:07:46.688421 | orchestrator | 2025-06-02 18:07:46.688429 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 18:07:46.688437 | orchestrator | Monday 02 June 2025 18:07:35 +0000 (0:00:00.529) 0:00:04.359 *********** 2025-06-02 18:07:46.688445 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:07:46.688453 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:07:46.688478 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:07:46.688487 | orchestrator | 2025-06-02 18:07:46.688494 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-06-02 18:07:46.688502 | orchestrator | Monday 02 June 2025 18:07:35 +0000 (0:00:00.407) 0:00:04.766 *********** 2025-06-02 18:07:46.688510 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:07:46.688518 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:07:46.688526 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:07:46.688533 | orchestrator | 2025-06-02 18:07:46.688541 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-06-02 18:07:46.688549 | orchestrator | Monday 02 June 2025 18:07:35 +0000 (0:00:00.294) 0:00:05.061 *********** 2025-06-02 18:07:46.688556 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:07:46.688564 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:07:46.688572 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:07:46.688579 | orchestrator | 2025-06-02 18:07:46.688587 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 18:07:46.688595 | orchestrator | Monday 02 June 2025 18:07:36 +0000 (0:00:00.322) 0:00:05.383 *********** 2025-06-02 18:07:46.688603 | orchestrator | skipping: 
[testbed-node-0] 2025-06-02 18:07:46.688610 | orchestrator | 2025-06-02 18:07:46.688618 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 18:07:46.688626 | orchestrator | Monday 02 June 2025 18:07:37 +0000 (0:00:00.722) 0:00:06.106 *********** 2025-06-02 18:07:46.688645 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:07:46.688653 | orchestrator | 2025-06-02 18:07:46.688661 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 18:07:46.688669 | orchestrator | Monday 02 June 2025 18:07:37 +0000 (0:00:00.267) 0:00:06.374 *********** 2025-06-02 18:07:46.688677 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:07:46.688685 | orchestrator | 2025-06-02 18:07:46.688693 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:07:46.688701 | orchestrator | Monday 02 June 2025 18:07:37 +0000 (0:00:00.257) 0:00:06.631 *********** 2025-06-02 18:07:46.688708 | orchestrator | 2025-06-02 18:07:46.688716 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:07:46.688724 | orchestrator | Monday 02 June 2025 18:07:37 +0000 (0:00:00.071) 0:00:06.702 *********** 2025-06-02 18:07:46.688750 | orchestrator | 2025-06-02 18:07:46.688764 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:07:46.688778 | orchestrator | Monday 02 June 2025 18:07:37 +0000 (0:00:00.070) 0:00:06.773 *********** 2025-06-02 18:07:46.688791 | orchestrator | 2025-06-02 18:07:46.688805 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 18:07:46.688818 | orchestrator | Monday 02 June 2025 18:07:37 +0000 (0:00:00.075) 0:00:06.848 *********** 2025-06-02 18:07:46.688830 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:07:46.688838 | orchestrator | 
2025-06-02 18:07:46.688846 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-02 18:07:46.688854 | orchestrator | Monday 02 June 2025 18:07:38 +0000 (0:00:00.286) 0:00:07.134 *********** 2025-06-02 18:07:46.688862 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:07:46.688871 | orchestrator | 2025-06-02 18:07:46.688893 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-06-02 18:07:46.688902 | orchestrator | Monday 02 June 2025 18:07:38 +0000 (0:00:00.247) 0:00:07.382 *********** 2025-06-02 18:07:46.688910 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:07:46.688918 | orchestrator | 2025-06-02 18:07:46.688925 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-06-02 18:07:46.688933 | orchestrator | Monday 02 June 2025 18:07:38 +0000 (0:00:00.146) 0:00:07.529 *********** 2025-06-02 18:07:46.688941 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:07:46.688949 | orchestrator | 2025-06-02 18:07:46.688956 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-06-02 18:07:46.688964 | orchestrator | Monday 02 June 2025 18:07:40 +0000 (0:00:01.977) 0:00:09.506 *********** 2025-06-02 18:07:46.688980 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:07:46.688987 | orchestrator | 2025-06-02 18:07:46.688995 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 2025-06-02 18:07:46.689003 | orchestrator | Monday 02 June 2025 18:07:40 +0000 (0:00:00.280) 0:00:09.787 *********** 2025-06-02 18:07:46.689011 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:07:46.689018 | orchestrator | 2025-06-02 18:07:46.689026 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-06-02 18:07:46.689034 | orchestrator | Monday 02 June 2025 18:07:41 +0000 (0:00:00.826) 0:00:10.613 
*********** 2025-06-02 18:07:46.689041 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:07:46.689049 | orchestrator | 2025-06-02 18:07:46.689057 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-06-02 18:07:46.689065 | orchestrator | Monday 02 June 2025 18:07:41 +0000 (0:00:00.148) 0:00:10.762 *********** 2025-06-02 18:07:46.689072 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:07:46.689080 | orchestrator | 2025-06-02 18:07:46.689088 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-02 18:07:46.689096 | orchestrator | Monday 02 June 2025 18:07:41 +0000 (0:00:00.162) 0:00:10.924 *********** 2025-06-02 18:07:46.689103 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 18:07:46.689111 | orchestrator | 2025-06-02 18:07:46.689119 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-02 18:07:46.689127 | orchestrator | Monday 02 June 2025 18:07:42 +0000 (0:00:00.268) 0:00:11.193 *********** 2025-06-02 18:07:46.689135 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:07:46.689142 | orchestrator | 2025-06-02 18:07:46.689150 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-02 18:07:46.689160 | orchestrator | Monday 02 June 2025 18:07:42 +0000 (0:00:00.301) 0:00:11.494 *********** 2025-06-02 18:07:46.689169 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 18:07:46.689179 | orchestrator | 2025-06-02 18:07:46.689188 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-02 18:07:46.689198 | orchestrator | Monday 02 June 2025 18:07:43 +0000 (0:00:01.263) 0:00:12.758 *********** 2025-06-02 18:07:46.689207 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 18:07:46.689216 | orchestrator | 2025-06-02 
18:07:46.689226 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-02 18:07:46.689235 | orchestrator | Monday 02 June 2025 18:07:43 +0000 (0:00:00.249) 0:00:13.008 *********** 2025-06-02 18:07:46.689245 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 18:07:46.689254 | orchestrator | 2025-06-02 18:07:46.689264 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:07:46.689273 | orchestrator | Monday 02 June 2025 18:07:44 +0000 (0:00:00.264) 0:00:13.272 *********** 2025-06-02 18:07:46.689282 | orchestrator | 2025-06-02 18:07:46.689292 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:07:46.689301 | orchestrator | Monday 02 June 2025 18:07:44 +0000 (0:00:00.071) 0:00:13.344 *********** 2025-06-02 18:07:46.689311 | orchestrator | 2025-06-02 18:07:46.689320 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-02 18:07:46.689329 | orchestrator | Monday 02 June 2025 18:07:44 +0000 (0:00:00.068) 0:00:13.412 *********** 2025-06-02 18:07:46.689339 | orchestrator | 2025-06-02 18:07:46.689348 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-02 18:07:46.689358 | orchestrator | Monday 02 June 2025 18:07:44 +0000 (0:00:00.077) 0:00:13.490 *********** 2025-06-02 18:07:46.689368 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-02 18:07:46.689377 | orchestrator | 2025-06-02 18:07:46.689387 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-02 18:07:46.689396 | orchestrator | Monday 02 June 2025 18:07:46 +0000 (0:00:01.807) 0:00:15.297 *********** 2025-06-02 18:07:46.689415 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-02 18:07:46.689425 | 
orchestrator |  "msg": [ 2025-06-02 18:07:46.689435 | orchestrator |  "Validator run completed.", 2025-06-02 18:07:46.689444 | orchestrator |  "You can find the report file here:", 2025-06-02 18:07:46.689454 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-06-02T18:07:31+00:00-report.json", 2025-06-02 18:07:46.689464 | orchestrator |  "on the following host:", 2025-06-02 18:07:46.689474 | orchestrator |  "testbed-manager" 2025-06-02 18:07:46.689483 | orchestrator |  ] 2025-06-02 18:07:46.689494 | orchestrator | } 2025-06-02 18:07:46.689504 | orchestrator | 2025-06-02 18:07:46.689520 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 18:07:46.689531 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-02 18:07:46.689541 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 18:07:46.689557 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 18:07:47.018054 | orchestrator | 2025-06-02 18:07:47.018159 | orchestrator | 2025-06-02 18:07:47.018170 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 18:07:47.018179 | orchestrator | Monday 02 June 2025 18:07:46 +0000 (0:00:00.458) 0:00:15.756 *********** 2025-06-02 18:07:47.018186 | orchestrator | =============================================================================== 2025-06-02 18:07:47.018194 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.98s 2025-06-02 18:07:47.018201 | orchestrator | Write report file ------------------------------------------------------- 1.81s 2025-06-02 18:07:47.018208 | orchestrator | Aggregate test results step one ----------------------------------------- 1.26s 2025-06-02 18:07:47.018214 | orchestrator | Get container info 
------------------------------------------------------ 0.98s 2025-06-02 18:07:47.018221 | orchestrator | Create report output directory ------------------------------------------ 0.86s 2025-06-02 18:07:47.018227 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.83s 2025-06-02 18:07:47.018234 | orchestrator | Aggregate test results step one ----------------------------------------- 0.72s 2025-06-02 18:07:47.018240 | orchestrator | Get timestamp for report file ------------------------------------------- 0.66s 2025-06-02 18:07:47.018247 | orchestrator | Set test result to passed if container is existing ---------------------- 0.53s 2025-06-02 18:07:47.018254 | orchestrator | Print report file information ------------------------------------------- 0.46s 2025-06-02 18:07:47.018260 | orchestrator | Prepare test data ------------------------------------------------------- 0.41s 2025-06-02 18:07:47.018267 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.32s 2025-06-02 18:07:47.018273 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s 2025-06-02 18:07:47.018280 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.30s 2025-06-02 18:07:47.018287 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2025-06-02 18:07:47.018293 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.29s 2025-06-02 18:07:47.018300 | orchestrator | Print report file information ------------------------------------------- 0.29s 2025-06-02 18:07:47.018306 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.28s 2025-06-02 18:07:47.018313 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.27s 2025-06-02 18:07:47.018320 | orchestrator | Aggregate test results step two 
----------------------------------------- 0.27s 2025-06-02 18:07:47.269120 | orchestrator | + osism validate ceph-osds 2025-06-02 18:07:48.965528 | orchestrator | Registering Redlock._acquired_script 2025-06-02 18:07:48.965654 | orchestrator | Registering Redlock._extend_script 2025-06-02 18:07:48.965669 | orchestrator | Registering Redlock._release_script 2025-06-02 18:07:57.909372 | orchestrator | 2025-06-02 18:07:57.909490 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-06-02 18:07:57.909507 | orchestrator | 2025-06-02 18:07:57.909519 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-02 18:07:57.909530 | orchestrator | Monday 02 June 2025 18:07:53 +0000 (0:00:00.452) 0:00:00.452 *********** 2025-06-02 18:07:57.909542 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 18:07:57.909553 | orchestrator | 2025-06-02 18:07:57.909564 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-02 18:07:57.909575 | orchestrator | Monday 02 June 2025 18:07:54 +0000 (0:00:00.648) 0:00:01.101 *********** 2025-06-02 18:07:57.909586 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 18:07:57.909597 | orchestrator | 2025-06-02 18:07:57.909608 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-02 18:07:57.909619 | orchestrator | Monday 02 June 2025 18:07:54 +0000 (0:00:00.417) 0:00:01.518 *********** 2025-06-02 18:07:57.909630 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-02 18:07:57.909640 | orchestrator | 2025-06-02 18:07:57.909691 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-02 18:07:57.909712 | orchestrator | Monday 02 June 2025 18:07:55 +0000 (0:00:00.959) 0:00:02.478 *********** 2025-06-02 18:07:57.909724 | 
orchestrator | ok: [testbed-node-3] 2025-06-02 18:07:57.909799 | orchestrator | 2025-06-02 18:07:57.909826 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-02 18:07:57.909838 | orchestrator | Monday 02 June 2025 18:07:55 +0000 (0:00:00.155) 0:00:02.634 *********** 2025-06-02 18:07:57.909849 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:07:57.909860 | orchestrator | 2025-06-02 18:07:57.909871 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-02 18:07:57.909882 | orchestrator | Monday 02 June 2025 18:07:55 +0000 (0:00:00.141) 0:00:02.775 *********** 2025-06-02 18:07:57.909893 | orchestrator | skipping: [testbed-node-3] 2025-06-02 18:07:57.909906 | orchestrator | skipping: [testbed-node-4] 2025-06-02 18:07:57.909918 | orchestrator | skipping: [testbed-node-5] 2025-06-02 18:07:57.909931 | orchestrator | 2025-06-02 18:07:57.909943 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-02 18:07:57.909956 | orchestrator | Monday 02 June 2025 18:07:56 +0000 (0:00:00.328) 0:00:03.104 *********** 2025-06-02 18:07:57.909969 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:07:57.909982 | orchestrator | 2025-06-02 18:07:57.909994 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-02 18:07:57.910006 | orchestrator | Monday 02 June 2025 18:07:56 +0000 (0:00:00.144) 0:00:03.248 *********** 2025-06-02 18:07:57.910080 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:07:57.910094 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:07:57.910107 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:07:57.910119 | orchestrator | 2025-06-02 18:07:57.910130 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-06-02 18:07:57.910141 | orchestrator | Monday 02 June 2025 18:07:56 +0000 (0:00:00.312) 0:00:03.561 
*********** 2025-06-02 18:07:57.910152 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:07:57.910163 | orchestrator | 2025-06-02 18:07:57.910174 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-02 18:07:57.910185 | orchestrator | Monday 02 June 2025 18:07:57 +0000 (0:00:00.555) 0:00:04.116 *********** 2025-06-02 18:07:57.910195 | orchestrator | ok: [testbed-node-3] 2025-06-02 18:07:57.910206 | orchestrator | ok: [testbed-node-4] 2025-06-02 18:07:57.910218 | orchestrator | ok: [testbed-node-5] 2025-06-02 18:07:57.910229 | orchestrator | 2025-06-02 18:07:57.910239 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-06-02 18:07:57.910250 | orchestrator | Monday 02 June 2025 18:07:57 +0000 (0:00:00.477) 0:00:04.594 *********** 2025-06-02 18:07:57.910285 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4409de6bbd72bdba2818f03323d17fac574e10ad227bcd0a7d7f9a1e83edfe48', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})  2025-06-02 18:07:57.910299 | orchestrator | skipping: [testbed-node-3] => (item={'id': '31982cbb8c3b55372be9c817235331eff1bdd0578f0a89519a0d6fc211026b4a', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 18:07:57.910312 | orchestrator | skipping: [testbed-node-3] => (item={'id': '264f4272df50027ad6e5dcc9bcc4735214e05d56b6bf4b3f37a6bca535e12369', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 18:07:57.910334 | orchestrator | skipping: [testbed-node-3] => (item={'id': '80a37fa0f6bcb3ee8e3e945d84b1869496c29ddbd3b6d2e60a6974891e70d7ec', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 
'running', 'status': 'Up 9 minutes (healthy)'})  2025-06-02 18:07:57.910346 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6de28986e7c08cfd251846945ef4027111fe19d6e7461b33fb67c9b3fbfaf994', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-02 18:07:57.910376 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b19e16726bd901ace3046436768762007def3adae4e5dea1dc7a9dfb39ae96f3', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-02 18:07:57.910388 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3f7d4b363957a6d32b999fba29ca7e3e56b1d6dc8c9de5069cacf973d0eaff84', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-06-02 18:07:57.910399 | orchestrator | skipping: [testbed-node-3] => (item={'id': '4855ac0f5a9a5eddc5bf8f573e6fc63b84fab57746a5bcdebfea7ec1323913c4', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-02 18:07:57.910410 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'eecf57262f337b2b160117567e4b5204ba71525e16c8a8cc0b78dc7df7efd94d', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})  2025-06-02 18:07:57.910430 | orchestrator | skipping: [testbed-node-3] => (item={'id': '393de3648516b3b2aad4f86813d66b5db6483bb756b17ec815f5450a00273342', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})  2025-06-02 18:07:57.910442 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'4881f6a5fd2e8b48a228792d3e747eb76246799dc0d847d01e6f16e38effdea6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-02 18:07:57.910453 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9fcadb6e8b0a4a0ab27bbf95b8312db390366b243738e82e9590d45f022032cc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-02 18:07:57.910464 | orchestrator | ok: [testbed-node-3] => (item={'id': '93b0bc2b40a0a97e202def4dbc5a4bd9f83ed277217c8097d8ab3641eba0fcd6', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-02 18:07:57.910475 | orchestrator | ok: [testbed-node-3] => (item={'id': 'de9eebfcddfea5aaddb86bc8f9fbf258793f08743e880fd08db19a345fda2e7e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 24 minutes'}) 2025-06-02 18:07:57.910493 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3e61928fca075266ca7baf424612850767a1877d9e1bc9e28694d86c65df7613', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-02 18:07:57.910504 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f88f9203a8697066360908e821c88b8e3878cd8b140178e7a7cd5c97e5094178', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-02 18:07:57.910515 | orchestrator | skipping: [testbed-node-3] => (item={'id': '900d2ce2f47231734f885963f0d596a6b583cd70bd6edd968a853623b78ec1bb', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-02 18:07:57.910528 | 
orchestrator | skipping: [testbed-node-3] => (item={'id': '5ca661edba0814f44080edb55e20b1198cddabc5717a4211934429c16b1e3015', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 18:07:57.910545 | orchestrator | skipping: [testbed-node-3] => (item={'id': '71ecbb4f52f4a92e012ee00a8e22657b96acef3fbbb982220d58db568fc9359b', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 18:07:57.910563 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3500239b7adba2fb39d7f1b71666f6c8fe61f654dfcc1cfed1238da6b73b30e4', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})
2025-06-02 18:07:57.910581 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'dec3d2d6118c84b83e61ab974ebe46479dc6b677985e891c699f6cd469ae9321', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-02 18:07:57.910610 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a7b4f31346084a20bc2531bd89da556008ae8862e645a53cf1f150efa815f59d', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-02 18:07:58.180682 | orchestrator | skipping: [testbed-node-4] => (item={'id': '53e0bfd0ec1c02289c6d843ed8ffbbda4929022880146557859785fe357153bf', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-02 18:07:58.180889 | orchestrator | skipping: [testbed-node-4] => (item={'id': '104c1fc0a623cc98b75d5f80843975498b18ed098a4a58ea53af18884a111b8b', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-02 18:07:58.180909 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9a3d2bb3b18daef7aeef2013ca48206fbb07c7d05e08cc76db429ba83237c387', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-06-02 18:07:58.180924 | orchestrator | skipping: [testbed-node-4] => (item={'id': '54139527a2d6b91c1074a58326c0799577dfabf72026603ad925f9bbaffd3655', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-06-02 18:07:58.180956 | orchestrator | skipping: [testbed-node-4] => (item={'id': '49290c2d593eb9e44e3cd1c3200bca4bd239a2e7cd1540bbad0e6ce9fa57c5e2', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-02 18:07:58.180968 | orchestrator | skipping: [testbed-node-4] => (item={'id': '78b6ed9552dbf8e09fc5e8b872822b2bb18f6fb3db7d2506b5264306acc91e4d', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})
2025-06-02 18:07:58.181000 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ed5f6f07d5258a0f76e24fc20f8d27e27491bde0360c33fca835d9edce78f691', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-06-02 18:07:58.181013 | orchestrator | skipping: [testbed-node-4] => (item={'id': '35a73461d9e85e51508a962871f31dc98465a66108e7e20e47258c56b95b9407', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-06-02 18:07:58.181024 | orchestrator | skipping: [testbed-node-4] => (item={'id': '0208910f1e12fca51f9da75349e87d000c315e7427beabac46cea324f8ef0f42', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-02 18:07:58.181035 | orchestrator | skipping: [testbed-node-4] => (item={'id': '386a80dfe71da475c72f78feb2d05293804954f40bf2bbb49b3d147ec67f8a2d', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-02 18:07:58.181047 | orchestrator | ok: [testbed-node-4] => (item={'id': '9669f2583edaa82a9d00ed49f299a9fb4810d8ae58b301c967b00e17fec8c3ab', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-02 18:07:58.181059 | orchestrator | ok: [testbed-node-4] => (item={'id': '0cf70a6594bf042e5a22abef24e28fe119dc4eb97efb18e3f52c0b6239efbc2e', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-02 18:07:58.181070 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd4c53d3f41374fd028ecaf95358ebe02b253f56b77b7501deea768ca434cb816', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})
2025-06-02 18:07:58.181081 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'abecbdb08756917a78a8f383bcdaa06ddae1f7821606472b2f69b3f84d51ec7b', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-06-02 18:07:58.181093 | orchestrator | skipping: [testbed-node-4] => (item={'id': '30e4f30a5e05349158fedd1fdca3318398262108183fe210526032e6e979091c', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-06-02 18:07:58.181123 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e15da94dbc8d6ee00bb75124cf5981cc50f48ad2a108e0717ea14209584ce8ab', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 18:07:58.181135 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3f721dc5ac4a080ec04939e88921d8e792ae7631076faa347fb67d72c622ec59', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 18:07:58.181147 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'bffc41aafa90e5cf5192bc2473de74ed577da1ada04510843c67090a1f97d3c9', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})
2025-06-02 18:07:58.181164 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2fb967fc204e0becca902c83434ced0d21c4be1f72e9a2de95e1c0e6c9e551fe', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 8 minutes (healthy)'})
2025-06-02 18:07:58.181175 | orchestrator | skipping: [testbed-node-5] => (item={'id': '26b37f84fd3c7837ea1d5e342d040c1fa79ceee7c09249e1a94b3f3571091802', 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-02 18:07:58.181195 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bad7b0ba2af852b26fbd0001568fb0a87b749ff1922a8ac4386ec384797dd65a', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-02 18:07:58.181209 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd42a4adf07973b3eb4031f6e9d75c64a79a3a6531fc4347c12be136cd6a98c79', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})
2025-06-02 18:07:58.181223 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c316b2f47e1167ded9777bc21f32597ca2df4b016a5e6fb773b87dde2e4ad89d', 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-06-02 18:07:58.181236 | orchestrator | skipping: [testbed-node-5] => (item={'id': '58b2833e62617c3d1ced86e1a324cd67890c311d9734bcc2a59f187990b4cfaf', 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})
2025-06-02 18:07:58.181249 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dad17bb903094085f706608d6d610b810de6353bc7ea873ea4fa72c32b89b5d8', 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})
2025-06-02 18:07:58.181263 | orchestrator | skipping: [testbed-node-5] => (item={'id': '496c13a9a43558fa2cbcffd66c5c2b0d282efb63bfa0767f76734231f4050d4e', 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})
2025-06-02 18:07:58.181276 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c3337f6b332127265763d59abd7ba248008068715029ded40a23086e0df9aced', 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 15 minutes'})
2025-06-02 18:07:58.181289 | orchestrator | skipping: [testbed-node-5] => (item={'id': '100661422c2958360646c9f4c1e0a0a8d22516155eac62971c35696fb7a163d0', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 21 minutes'})
2025-06-02 18:07:58.181302 | orchestrator | skipping: [testbed-node-5] => (item={'id': '911c4534054ef4d7595869c65863c3ceb1e0e8bf95221a3f3c39b6c84b4fed18', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 22 minutes'})
2025-06-02 18:07:58.181315 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3cb80f9e26bbb6b42f0548fb02daab0f581eb11881909d3c3293899fc87a55fc', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})
2025-06-02 18:07:58.181335 | orchestrator | ok: [testbed-node-5] => (item={'id': '02620647503caf2f9c4f96d5e05e466426ee63eb288986f110e6b1089a88f871', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-02 18:08:06.930091 | orchestrator | ok: [testbed-node-5] => (item={'id': '50988d50376291ef1bf14cfcf2e914bf3e395e7190962756289a13eda4dfbf5a', 'image': 'registry.osism.tech/osism/ceph-daemon:reef', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 24 minutes'})
2025-06-02 18:08:06.930234 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a58413a630f717b3e346999bf67caccab3766d4dc0a2f62de3ec29340fcd2cc0', 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})
2025-06-02 18:08:06.930256 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'debeda3cd07894866ff02252557d0a698f2d5f5d1360e223dd2dc887d4e14773', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-06-02 18:08:06.930309 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b68851fd0f9b208cec22b873e016e5b3f4c6aa8a32c5cd0d125fe15e51ae5dff', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})
2025-06-02 18:08:06.930324 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'acbb5963b814f26b6763ce3547928fc757992da216a6400b40b82f8207af9f57', 'image': 'registry.osism.tech/kolla/cron:2024.2', 'name': '/cron', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 18:08:06.930335 | orchestrator | skipping: [testbed-node-5] => (item={'id': '196ea670056acedae42029b0ece2ff01be0b0c3f46097312fdf4c67226d47768', 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 30 minutes'})
2025-06-02 18:08:06.930347 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0dfde24d312add880391dcc1e08a4b64b1cc1b48f53ef600ac1689342f45974b', 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'name': '/fluentd', 'state': 'running', 'status': 'Up 31 minutes'})
2025-06-02 18:08:06.930358 | orchestrator |
2025-06-02 18:08:06.930371 | orchestrator | TASK [Get count of ceph-osd containers on host] ********************************
2025-06-02 18:08:06.930383 | orchestrator | Monday 02 June 2025 18:07:58 +0000 (0:00:00.535) 0:00:05.129 ***********
2025-06-02 18:08:06.930394 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:06.930406 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:06.930417 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:06.930428 | orchestrator |
2025-06-02 18:08:06.930439 | orchestrator | TASK [Set test result to failed when count of containers is wrong] *************
2025-06-02 18:08:06.930450 | orchestrator | Monday 02 June 2025 18:07:58 +0000 (0:00:00.303) 0:00:05.433 ***********
2025-06-02 18:08:06.930461 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:06.930473 | orchestrator | skipping: [testbed-node-4]
2025-06-02 18:08:06.930483 | orchestrator | skipping: [testbed-node-5]
2025-06-02 18:08:06.930494 | orchestrator |
2025-06-02 18:08:06.930505 | orchestrator | TASK [Set test result to passed if count matches] ******************************
2025-06-02 18:08:06.930516 | orchestrator | Monday 02 June 2025 18:07:58 +0000 (0:00:00.486) 0:00:05.919 ***********
2025-06-02 18:08:06.930527 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:06.930541 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:06.930553 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:06.930565 | orchestrator |
2025-06-02 18:08:06.930578 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 18:08:06.930592 | orchestrator | Monday 02 June 2025 18:07:59 +0000 (0:00:00.327) 0:00:06.247 ***********
2025-06-02 18:08:06.930605 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:06.930619 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:06.930631 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:06.930644 | orchestrator |
2025-06-02 18:08:06.930657 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ********************
2025-06-02 18:08:06.930670 | orchestrator | Monday 02 June 2025 18:07:59 +0000 (0:00:00.287) 0:00:06.535 ***********
2025-06-02 18:08:06.930683 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})
2025-06-02 18:08:06.930695 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})
2025-06-02 18:08:06.930705 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:06.930716 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})
2025-06-02 18:08:06.930753 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})
2025-06-02 18:08:06.930764 | orchestrator | skipping: [testbed-node-4]
2025-06-02 18:08:06.930775 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})
2025-06-02 18:08:06.930786 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})
2025-06-02 18:08:06.930805 | orchestrator | skipping: [testbed-node-5]
2025-06-02 18:08:06.930816 | orchestrator |
2025-06-02 18:08:06.930827 | orchestrator | TASK [Get count of ceph-osd containers that are not running] *******************
2025-06-02 18:08:06.930838 | orchestrator | Monday 02 June 2025 18:07:59 +0000 (0:00:00.318) 0:00:06.853 ***********
2025-06-02 18:08:06.930849 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:06.930859 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:06.930876 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:06.930896 | orchestrator |
2025-06-02 18:08:06.930949 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-06-02 18:08:06.930971 | orchestrator | Monday 02 June 2025 18:08:00 +0000 (0:00:00.501) 0:00:07.354 ***********
2025-06-02 18:08:06.930989 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:06.931009 | orchestrator | skipping: [testbed-node-4]
2025-06-02 18:08:06.931029 | orchestrator | skipping: [testbed-node-5]
2025-06-02 18:08:06.931050 | orchestrator |
2025-06-02 18:08:06.931068 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-06-02 18:08:06.931082 | orchestrator | Monday 02 June 2025 18:08:00 +0000 (0:00:00.295) 0:00:07.650 ***********
2025-06-02 18:08:06.931092 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:06.931104 | orchestrator | skipping: [testbed-node-4]
2025-06-02 18:08:06.931114 | orchestrator | skipping: [testbed-node-5]
2025-06-02 18:08:06.931125 | orchestrator |
2025-06-02 18:08:06.931136 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-06-02 18:08:06.931147 | orchestrator | Monday 02 June 2025 18:08:00 +0000 (0:00:00.290) 0:00:07.940 ***********
2025-06-02 18:08:06.931157 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:06.931168 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:06.931179 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:06.931190 | orchestrator |
2025-06-02 18:08:06.931200 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 18:08:06.931212 | orchestrator | Monday 02 June 2025 18:08:01 +0000 (0:00:00.307) 0:00:08.248 ***********
2025-06-02 18:08:06.931222 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:06.931233 | orchestrator |
2025-06-02 18:08:06.931244 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 18:08:06.931255 | orchestrator | Monday 02 June 2025 18:08:01 +0000 (0:00:00.666) 0:00:08.915 ***********
2025-06-02 18:08:06.931265 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:06.931276 | orchestrator |
2025-06-02 18:08:06.931287 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 18:08:06.931297 | orchestrator | Monday 02 June 2025 18:08:02 +0000 (0:00:00.263) 0:00:09.178 ***********
2025-06-02 18:08:06.931308 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:06.931319 | orchestrator |
2025-06-02 18:08:06.931329 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 18:08:06.931340 | orchestrator | Monday 02 June 2025 18:08:02 +0000 (0:00:00.249) 0:00:09.427 ***********
2025-06-02 18:08:06.931351 | orchestrator |
2025-06-02 18:08:06.931361 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 18:08:06.931372 | orchestrator | Monday 02 June 2025 18:08:02 +0000 (0:00:00.069) 0:00:09.497 ***********
2025-06-02 18:08:06.931383 | orchestrator |
2025-06-02 18:08:06.931393 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 18:08:06.931404 | orchestrator | Monday 02 June 2025 18:08:02 +0000 (0:00:00.067) 0:00:09.565 ***********
2025-06-02 18:08:06.931414 | orchestrator |
2025-06-02 18:08:06.931425 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 18:08:06.931436 | orchestrator | Monday 02 June 2025 18:08:02 +0000 (0:00:00.070) 0:00:09.635 ***********
2025-06-02 18:08:06.931447 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:06.931457 | orchestrator |
2025-06-02 18:08:06.931468 | orchestrator | TASK [Fail early due to containers not running] ********************************
2025-06-02 18:08:06.931479 | orchestrator | Monday 02 June 2025 18:08:02 +0000 (0:00:00.258) 0:00:09.894 ***********
2025-06-02 18:08:06.931499 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:06.931510 | orchestrator |
2025-06-02 18:08:06.931521 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 18:08:06.931532 | orchestrator | Monday 02 June 2025 18:08:03 +0000 (0:00:00.258) 0:00:10.152 ***********
2025-06-02 18:08:06.931542 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:06.931553 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:06.931564 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:06.931574 | orchestrator |
2025-06-02 18:08:06.931628 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-06-02 18:08:06.931641 | orchestrator | Monday 02 June 2025 18:08:03 +0000 (0:00:00.358) 0:00:10.511 ***********
2025-06-02 18:08:06.931652 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:06.931663 | orchestrator |
2025-06-02 18:08:06.931674 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-06-02 18:08:06.931684 | orchestrator | Monday 02 June 2025 18:08:04 +0000 (0:00:00.640) 0:00:11.152 ***********
2025-06-02 18:08:06.931695 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-06-02 18:08:06.931706 | orchestrator |
2025-06-02 18:08:06.931717 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-06-02 18:08:06.931760 | orchestrator | Monday 02 June 2025 18:08:05 +0000 (0:00:01.711) 0:00:12.863 ***********
2025-06-02 18:08:06.931776 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:06.931787 | orchestrator |
2025-06-02 18:08:06.931798 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-06-02 18:08:06.931809 | orchestrator | Monday 02 June 2025 18:08:06 +0000 (0:00:00.143) 0:00:13.007 ***********
2025-06-02 18:08:06.931819 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:06.931830 | orchestrator |
2025-06-02 18:08:06.931841 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-06-02 18:08:06.931851 | orchestrator | Monday 02 June 2025 18:08:06 +0000 (0:00:00.315) 0:00:13.323 ***********
2025-06-02 18:08:06.931862 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:06.931873 | orchestrator |
2025-06-02 18:08:06.931883 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-06-02 18:08:06.931894 | orchestrator | Monday 02 June 2025 18:08:06 +0000 (0:00:00.135) 0:00:13.459 ***********
2025-06-02 18:08:06.931905 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:06.931915 | orchestrator |
2025-06-02 18:08:06.931926 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 18:08:06.931937 | orchestrator | Monday 02 June 2025 18:08:06 +0000 (0:00:00.131) 0:00:13.590 ***********
2025-06-02 18:08:06.931948 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:06.931958 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:06.931969 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:06.931979 | orchestrator |
2025-06-02 18:08:06.931990 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-06-02 18:08:06.932009 | orchestrator | Monday 02 June 2025 18:08:06 +0000 (0:00:00.293) 0:00:13.884 ***********
2025-06-02 18:08:19.543515 | orchestrator | changed: [testbed-node-3]
2025-06-02 18:08:19.543630 | orchestrator | changed: [testbed-node-4]
2025-06-02 18:08:19.543646 | orchestrator | changed: [testbed-node-5]
2025-06-02 18:08:19.543658 | orchestrator |
2025-06-02 18:08:19.543670 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-06-02 18:08:19.543682 | orchestrator | Monday 02 June 2025 18:08:09 +0000 (0:00:02.589) 0:00:16.473 ***********
2025-06-02 18:08:19.543693 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:19.543705 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:19.543750 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:19.543763 | orchestrator |
2025-06-02 18:08:19.543774 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-06-02 18:08:19.543785 | orchestrator | Monday 02 June 2025 18:08:09 +0000 (0:00:00.342) 0:00:16.815 ***********
2025-06-02 18:08:19.543796 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:19.543806 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:19.543840 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:19.543851 | orchestrator |
2025-06-02 18:08:19.543862 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-06-02 18:08:19.543873 | orchestrator | Monday 02 June 2025 18:08:10 +0000 (0:00:00.492) 0:00:17.308 ***********
2025-06-02 18:08:19.543898 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:19.543909 | orchestrator | skipping: [testbed-node-4]
2025-06-02 18:08:19.543920 | orchestrator | skipping: [testbed-node-5]
2025-06-02 18:08:19.543930 | orchestrator |
2025-06-02 18:08:19.543941 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-06-02 18:08:19.543952 | orchestrator | Monday 02 June 2025 18:08:10 +0000 (0:00:00.305) 0:00:17.614 ***********
2025-06-02 18:08:19.543962 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:19.543973 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:19.543984 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:19.543994 | orchestrator |
2025-06-02 18:08:19.544007 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-06-02 18:08:19.544019 | orchestrator | Monday 02 June 2025 18:08:11 +0000 (0:00:00.521) 0:00:18.135 ***********
2025-06-02 18:08:19.544031 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:19.544044 | orchestrator | skipping: [testbed-node-4]
2025-06-02 18:08:19.544057 | orchestrator | skipping: [testbed-node-5]
2025-06-02 18:08:19.544069 | orchestrator |
2025-06-02 18:08:19.544082 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-06-02 18:08:19.544094 | orchestrator | Monday 02 June 2025 18:08:11 +0000 (0:00:00.287) 0:00:18.423 ***********
2025-06-02 18:08:19.544107 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:19.544119 | orchestrator | skipping: [testbed-node-4]
2025-06-02 18:08:19.544131 | orchestrator | skipping: [testbed-node-5]
2025-06-02 18:08:19.544144 | orchestrator |
2025-06-02 18:08:19.544156 | orchestrator | TASK [Prepare test data] *******************************************************
2025-06-02 18:08:19.544169 | orchestrator | Monday 02 June 2025 18:08:11 +0000 (0:00:00.310) 0:00:18.733 ***********
2025-06-02 18:08:19.544181 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:19.544194 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:19.544212 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:19.544231 | orchestrator |
2025-06-02 18:08:19.544248 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-06-02 18:08:19.544266 | orchestrator | Monday 02 June 2025 18:08:12 +0000 (0:00:00.501) 0:00:19.234 ***********
2025-06-02 18:08:19.544283 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:19.544300 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:19.544318 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:19.544336 | orchestrator |
2025-06-02 18:08:19.544354 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-06-02 18:08:19.544372 | orchestrator | Monday 02 June 2025 18:08:12 +0000 (0:00:00.724) 0:00:19.958 ***********
2025-06-02 18:08:19.544391 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:19.544411 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:19.544428 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:19.544447 | orchestrator |
2025-06-02 18:08:19.544459 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-06-02 18:08:19.544470 | orchestrator | Monday 02 June 2025 18:08:13 +0000 (0:00:00.349) 0:00:20.308 ***********
2025-06-02 18:08:19.544481 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:19.544492 | orchestrator | skipping: [testbed-node-4]
2025-06-02 18:08:19.544502 | orchestrator | skipping: [testbed-node-5]
2025-06-02 18:08:19.544513 | orchestrator |
2025-06-02 18:08:19.544524 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-06-02 18:08:19.544534 | orchestrator | Monday 02 June 2025 18:08:13 +0000 (0:00:00.310) 0:00:20.618 ***********
2025-06-02 18:08:19.544545 | orchestrator | ok: [testbed-node-3]
2025-06-02 18:08:19.544556 | orchestrator | ok: [testbed-node-4]
2025-06-02 18:08:19.544566 | orchestrator | ok: [testbed-node-5]
2025-06-02 18:08:19.544577 | orchestrator |
2025-06-02 18:08:19.544599 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-06-02 18:08:19.544610 | orchestrator | Monday 02 June 2025 18:08:13 +0000 (0:00:00.291) 0:00:20.910 ***********
2025-06-02 18:08:19.544621 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 18:08:19.544631 | orchestrator |
2025-06-02 18:08:19.544642 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-06-02 18:08:19.544653 | orchestrator | Monday 02 June 2025 18:08:14 +0000 (0:00:00.714) 0:00:21.624 ***********
2025-06-02 18:08:19.544663 | orchestrator | skipping: [testbed-node-3]
2025-06-02 18:08:19.544674 | orchestrator |
2025-06-02 18:08:19.544684 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-06-02 18:08:19.544695 | orchestrator | Monday 02 June 2025 18:08:14 +0000 (0:00:00.254) 0:00:21.879 ***********
2025-06-02 18:08:19.544706 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 18:08:19.544776 | orchestrator |
2025-06-02 18:08:19.544789 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-06-02 18:08:19.544799 | orchestrator | Monday 02 June 2025 18:08:16 +0000 (0:00:01.720) 0:00:23.599 ***********
2025-06-02 18:08:19.544810 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 18:08:19.544821 | orchestrator |
2025-06-02 18:08:19.544831 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-06-02 18:08:19.544842 | orchestrator | Monday 02 June 2025 18:08:16 +0000 (0:00:00.256) 0:00:23.855 ***********
2025-06-02 18:08:19.544872 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 18:08:19.544883 | orchestrator |
2025-06-02 18:08:19.544894 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 18:08:19.544905 | orchestrator | Monday 02 June 2025 18:08:17 +0000 (0:00:00.264) 0:00:24.120 ***********
2025-06-02 18:08:19.544915 | orchestrator |
2025-06-02 18:08:19.544926 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 18:08:19.544937 | orchestrator | Monday 02 June 2025 18:08:17 +0000 (0:00:00.068) 0:00:24.188 ***********
2025-06-02 18:08:19.544948 | orchestrator |
2025-06-02 18:08:19.544958 | orchestrator | TASK [Flush handlers] **********************************************************
2025-06-02 18:08:19.544969 | orchestrator | Monday 02 June 2025 18:08:17 +0000 (0:00:00.066) 0:00:24.255 ***********
2025-06-02 18:08:19.544979 | orchestrator |
2025-06-02 18:08:19.544990 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-06-02 18:08:19.545001 | orchestrator | Monday 02 June 2025 18:08:17 +0000 (0:00:00.070) 0:00:24.325 ***********
2025-06-02 18:08:19.545011 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-06-02 18:08:19.545021 | orchestrator |
2025-06-02 18:08:19.545040 | orchestrator | TASK [Print report file information] *******************************************
2025-06-02 18:08:19.545051 | orchestrator | Monday 02 June 2025 18:08:18 +0000 (0:00:01.270) 0:00:25.596 ***********
2025-06-02 18:08:19.545062 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-06-02 18:08:19.545073 | orchestrator |  "msg": [
2025-06-02 18:08:19.545084 | orchestrator |  "Validator run completed.",
2025-06-02 18:08:19.545094 | orchestrator |  "You can find the report file here:",
2025-06-02 18:08:19.545105 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-02T18:07:54+00:00-report.json",
2025-06-02 18:08:19.545117 | orchestrator |  "on the following host:",
2025-06-02 18:08:19.545128 | orchestrator |  "testbed-manager"
2025-06-02 18:08:19.545138 | orchestrator |  ]
2025-06-02 18:08:19.545149 | orchestrator | }
2025-06-02 18:08:19.545160 | orchestrator |
2025-06-02 18:08:19.545171 | orchestrator | PLAY RECAP *********************************************************************
2025-06-02 18:08:19.545183 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-06-02 18:08:19.545196 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-02 18:08:19.545215 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-06-02 18:08:19.545226 | orchestrator |
2025-06-02 18:08:19.545236 | orchestrator |
2025-06-02 18:08:19.545247 | orchestrator | TASKS RECAP ********************************************************************
2025-06-02 18:08:19.545258 | orchestrator | Monday 02 June 2025 18:08:19 +0000 (0:00:00.585) 0:00:26.182 ***********
2025-06-02 18:08:19.545268 | orchestrator | ===============================================================================
2025-06-02 18:08:19.545279 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.59s
2025-06-02 18:08:19.545290 | orchestrator | Aggregate test results step one ----------------------------------------- 1.72s
2025-06-02 18:08:19.545300 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.71s
2025-06-02 18:08:19.545311 | orchestrator | Write report file ------------------------------------------------------- 1.27s
2025-06-02 18:08:19.545321 | orchestrator | Create report output directory ------------------------------------------ 0.96s
2025-06-02 18:08:19.545332 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.72s
2025-06-02 18:08:19.545342 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.71s
2025-06-02 18:08:19.545353 | orchestrator | Aggregate test results step one ----------------------------------------- 0.67s
2025-06-02 18:08:19.545364 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s
2025-06-02 18:08:19.545374 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.64s
2025-06-02 18:08:19.545385 | orchestrator | Print report file information ------------------------------------------- 0.59s
2025-06-02 18:08:19.545395 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.56s
2025-06-02 18:08:19.545406 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.54s
2025-06-02 18:08:19.545417 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.52s
2025-06-02 18:08:19.545427 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s
2025-06-02 18:08:19.545438 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.50s
2025-06-02 18:08:19.545448 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.49s
2025-06-02 18:08:19.545459 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.49s
2025-06-02 18:08:19.545470 | orchestrator | Prepare test data ------------------------------------------------------- 0.48s
2025-06-02 18:08:19.545480 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.42s
2025-06-02 18:08:19.795485 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-06-02 18:08:19.804806 | orchestrator | + set -e
2025-06-02 18:08:19.804884 | orchestrator | + source /opt/manager-vars.sh
2025-06-02 18:08:19.804898 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-06-02 18:08:19.804910 | orchestrator | ++ NUMBER_OF_NODES=6
2025-06-02 18:08:19.804922 | orchestrator | ++ export CEPH_VERSION=reef
2025-06-02 18:08:19.804932 | orchestrator | ++ CEPH_VERSION=reef
2025-06-02 18:08:19.805278 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-06-02 18:08:19.805296 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-06-02 18:08:19.805307 | orchestrator | ++ export MANAGER_VERSION=latest
2025-06-02 18:08:19.805318 | orchestrator | ++ MANAGER_VERSION=latest
2025-06-02 18:08:19.805329 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-06-02 18:08:19.805339 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-06-02 18:08:19.805350 | orchestrator | ++ export ARA=false
2025-06-02 18:08:19.805360 | orchestrator | ++ ARA=false
2025-06-02 18:08:19.805371 | orchestrator | ++ export DEPLOY_MODE=manager
2025-06-02 18:08:19.805381 | orchestrator | ++ DEPLOY_MODE=manager
2025-06-02 18:08:19.805392 | orchestrator | ++ export TEMPEST=false
2025-06-02 18:08:19.805402 | orchestrator | ++ TEMPEST=false
2025-06-02 18:08:19.805413 | orchestrator | ++ export IS_ZUUL=true
2025-06-02 18:08:19.805423 | orchestrator | ++ IS_ZUUL=true
2025-06-02 18:08:19.805434 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65
2025-06-02 18:08:19.805470 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.65
2025-06-02 18:08:19.805480 | orchestrator | ++ export EXTERNAL_API=false
2025-06-02 18:08:19.805491 | orchestrator | ++ EXTERNAL_API=false
2025-06-02 18:08:19.805501 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-06-02 18:08:19.805511 | orchestrator | ++ IMAGE_USER=ubuntu
2025-06-02 18:08:19.805522 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-06-02 18:08:19.805533 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-06-02 18:08:19.805556 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-06-02 18:08:19.805567 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-06-02 18:08:19.805577 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-06-02 18:08:19.805588 | orchestrator | + source /etc/os-release
2025-06-02 18:08:19.805599 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS'
2025-06-02 18:08:19.805610 | orchestrator | ++ NAME=Ubuntu
2025-06-02 18:08:19.805620 | orchestrator | ++ VERSION_ID=24.04
2025-06-02 18:08:19.805631 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)'
2025-06-02 18:08:19.805641 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-02 18:08:19.805652 | orchestrator | ++ ID=ubuntu 2025-06-02 18:08:19.805662 | orchestrator | ++ ID_LIKE=debian 2025-06-02 18:08:19.805673 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-06-02 18:08:19.805698 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-02 18:08:19.805709 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-02 18:08:19.805750 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-06-02 18:08:19.805762 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-02 18:08:19.805773 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-02 18:08:19.805783 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-02 18:08:19.805794 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-02 18:08:19.805807 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-02 18:08:19.840550 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-02 18:08:44.499903 | orchestrator | 2025-06-02 18:08:44.500021 | orchestrator | # Status of Elasticsearch 2025-06-02 18:08:44.500038 | orchestrator | 2025-06-02 18:08:44.500050 | orchestrator | + pushd /opt/configuration/contrib 2025-06-02 18:08:44.500063 | orchestrator | + echo 2025-06-02 18:08:44.500074 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-02 18:08:44.500085 | orchestrator | + echo 2025-06-02 18:08:44.500097 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-02 18:08:44.681908 | orchestrator | OK - elasticsearch (kolla_logging) is running. 
status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-02 18:08:44.681984 | orchestrator | 2025-06-02 18:08:44.681992 | orchestrator | # Status of MariaDB 2025-06-02 18:08:44.681999 | orchestrator | 2025-06-02 18:08:44.682005 | orchestrator | + echo 2025-06-02 18:08:44.682050 | orchestrator | + echo '# Status of MariaDB' 2025-06-02 18:08:44.682057 | orchestrator | + echo 2025-06-02 18:08:44.682062 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-02 18:08:44.682069 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-02 18:08:44.758814 | orchestrator | Reading package lists... 2025-06-02 18:08:45.115116 | orchestrator | Building dependency tree... 2025-06-02 18:08:45.115609 | orchestrator | Reading state information... 2025-06-02 18:08:45.642935 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-02 18:08:45.643092 | orchestrator | bc set to manually installed. 2025-06-02 18:08:45.643124 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
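The `check_elasticsearch` plugin invoked above boils down to querying the cluster health endpoint and gating on the `status` field (green/yellow/red). A minimal self-contained sketch of that logic — the inline JSON is a stand-in for a live `_cluster/health` response with the values shown in the log, and the OK/WARNING/CRITICAL mapping follows the usual monitoring-plugin convention, not this plugin's exact code:

```shell
#!/usr/bin/env bash
# Stand-in for the live response; a real check would do something like
#   curl -s https://api-int.testbed.osism.xyz/_cluster/health   (endpoint assumed)
health='{"status":"green","timed_out":false,"number_of_nodes":3,"active_shards":22,"unassigned_shards":0}'

# Extract the "status" field without requiring jq.
status=$(printf '%s' "$health" | sed -n 's/.*"status":"\([a-z]*\)".*/\1/p')

# Map cluster status to a monitoring verdict (exit codes per plugin convention).
case "$status" in
  green)  msg="OK - cluster status: $status";       exit_code=0 ;;
  yellow) msg="WARNING - cluster status: $status";  exit_code=1 ;;
  *)      msg="CRITICAL - cluster status: $status"; exit_code=2 ;;
esac
echo "$msg"
# A real plugin would finish with: exit "$exit_code"
```

The real plugin additionally emits perfdata (`'active_primary'=9 'active'=22 …`) as seen in the log line above.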
2025-06-02 18:08:46.353575 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-02 18:08:46.354146 | orchestrator | 2025-06-02 18:08:46.354174 | orchestrator | # Status of Prometheus 2025-06-02 18:08:46.354182 | orchestrator | 2025-06-02 18:08:46.354189 | orchestrator | + echo 2025-06-02 18:08:46.354197 | orchestrator | + echo '# Status of Prometheus' 2025-06-02 18:08:46.354204 | orchestrator | + echo 2025-06-02 18:08:46.354212 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-02 18:08:46.424261 | orchestrator | Unauthorized 2025-06-02 18:08:46.427255 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-02 18:08:46.524633 | orchestrator | Unauthorized 2025-06-02 18:08:46.527969 | orchestrator | 2025-06-02 18:08:46.528045 | orchestrator | # Status of RabbitMQ 2025-06-02 18:08:46.528059 | orchestrator | 2025-06-02 18:08:46.528071 | orchestrator | + echo 2025-06-02 18:08:46.528082 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-02 18:08:46.528093 | orchestrator | + echo 2025-06-02 18:08:46.528105 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-02 18:08:47.023469 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-02 18:08:47.032597 | orchestrator | 2025-06-02 18:08:47.032694 | orchestrator | # Status of Redis 2025-06-02 18:08:47.032750 | orchestrator | 2025-06-02 18:08:47.032761 | orchestrator | + echo 2025-06-02 18:08:47.032770 | orchestrator | + echo '# Status of Redis' 2025-06-02 18:08:47.032780 | orchestrator | + echo 2025-06-02 18:08:47.032791 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-02 18:08:47.038463 | orchestrator | 
TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001794s;;;0.000000;10.000000 2025-06-02 18:08:47.038553 | orchestrator | 2025-06-02 18:08:47.038566 | orchestrator | + popd 2025-06-02 18:08:47.038576 | orchestrator | + echo 2025-06-02 18:08:47.038584 | orchestrator | # Create backup of MariaDB database 2025-06-02 18:08:47.038594 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-02 18:08:47.038604 | orchestrator | 2025-06-02 18:08:47.038613 | orchestrator | + echo 2025-06-02 18:08:47.038622 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-02 18:08:48.836949 | orchestrator | 2025-06-02 18:08:48 | INFO  | Task 80504a9c-5496-4024-80b8-823f12dfadc8 (mariadb_backup) was prepared for execution. 2025-06-02 18:08:48.837051 | orchestrator | 2025-06-02 18:08:48 | INFO  | It takes a moment until task 80504a9c-5496-4024-80b8-823f12dfadc8 (mariadb_backup) has been started and output is visible here. 2025-06-02 18:08:52.884640 | orchestrator | 2025-06-02 18:08:52.888687 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-02 18:08:52.889141 | orchestrator | 2025-06-02 18:08:52.889539 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-02 18:08:52.890461 | orchestrator | Monday 02 June 2025 18:08:52 +0000 (0:00:00.182) 0:00:00.182 *********** 2025-06-02 18:08:53.079111 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:08:53.219591 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:08:53.220505 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:08:53.222247 | orchestrator | 2025-06-02 18:08:53.223429 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-02 18:08:53.224362 | orchestrator | Monday 02 June 2025 18:08:53 +0000 (0:00:00.338) 0:00:00.521 *********** 2025-06-02 18:08:53.800631 | orchestrator | ok: [testbed-node-0] => 
(item=enable_mariadb_True) 2025-06-02 18:08:53.800917 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-02 18:08:53.802148 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-02 18:08:53.803386 | orchestrator | 2025-06-02 18:08:53.803913 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-02 18:08:53.804849 | orchestrator | 2025-06-02 18:08:53.805639 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-02 18:08:53.806140 | orchestrator | Monday 02 June 2025 18:08:53 +0000 (0:00:00.580) 0:00:01.101 *********** 2025-06-02 18:08:54.230521 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-02 18:08:54.233229 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-02 18:08:54.234333 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-02 18:08:54.235674 | orchestrator | 2025-06-02 18:08:54.236870 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-02 18:08:54.237861 | orchestrator | Monday 02 June 2025 18:08:54 +0000 (0:00:00.428) 0:00:01.530 *********** 2025-06-02 18:08:54.793692 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-02 18:08:54.799087 | orchestrator | 2025-06-02 18:08:54.799281 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-02 18:08:54.800252 | orchestrator | Monday 02 June 2025 18:08:54 +0000 (0:00:00.564) 0:00:02.095 *********** 2025-06-02 18:08:57.929085 | orchestrator | ok: [testbed-node-1] 2025-06-02 18:08:57.931267 | orchestrator | ok: [testbed-node-0] 2025-06-02 18:08:57.931322 | orchestrator | ok: [testbed-node-2] 2025-06-02 18:08:57.932412 | orchestrator | 2025-06-02 18:08:57.934068 | orchestrator | TASK [mariadb : Taking full database backup via 
Mariabackup] ******************* 2025-06-02 18:08:57.934967 | orchestrator | Monday 02 June 2025 18:08:57 +0000 (0:00:03.131) 0:00:05.226 *********** 2025-06-02 18:11:22.650886 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:11:22.651008 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:11:22.651024 | orchestrator | 2025-06-02 18:11:22.651037 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2025-06-02 18:11:37.961877 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-02 18:11:37.961992 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-02 18:11:37.962007 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-02 18:11:37.964222 | orchestrator | mariadb_bootstrap_restart 2025-06-02 18:11:38.053345 | orchestrator | changed: [testbed-node-0] 2025-06-02 18:11:38.056047 | orchestrator | 2025-06-02 18:11:38.065654 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-02 18:11:38.065849 | orchestrator | skipping: no hosts matched 2025-06-02 18:11:38.065868 | orchestrator | 2025-06-02 18:11:38.066674 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-02 18:11:38.067060 | orchestrator | skipping: no hosts matched 2025-06-02 18:11:38.067398 | orchestrator | 2025-06-02 18:11:38.067906 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-02 18:11:38.068244 | orchestrator | skipping: no hosts matched 2025-06-02 18:11:38.068425 | orchestrator | 2025-06-02 18:11:38.068661 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-02 18:11:38.069029 | orchestrator | 2025-06-02 18:11:38.069423 | orchestrator | TASK [Include mariadb post-deploy.yml] 
***************************************** 2025-06-02 18:11:38.072276 | orchestrator | Monday 02 June 2025 18:11:38 +0000 (0:02:40.122) 0:02:45.349 *********** 2025-06-02 18:11:38.249311 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:11:38.380388 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:11:38.381543 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:11:38.382936 | orchestrator | 2025-06-02 18:11:38.384122 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-02 18:11:38.384880 | orchestrator | Monday 02 June 2025 18:11:38 +0000 (0:00:00.332) 0:02:45.682 *********** 2025-06-02 18:11:38.771525 | orchestrator | skipping: [testbed-node-0] 2025-06-02 18:11:38.820960 | orchestrator | skipping: [testbed-node-1] 2025-06-02 18:11:38.821442 | orchestrator | skipping: [testbed-node-2] 2025-06-02 18:11:38.822413 | orchestrator | 2025-06-02 18:11:38.822882 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 18:11:38.823418 | orchestrator | 2025-06-02 18:11:38 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 18:11:38.823529 | orchestrator | 2025-06-02 18:11:38 | INFO  | Please wait and do not abort execution. 
2025-06-02 18:11:38.824058 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-02 18:11:38.825193 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 18:11:38.825287 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-02 18:11:38.826491 | orchestrator | 2025-06-02 18:11:38.827364 | orchestrator | 2025-06-02 18:11:38.827931 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 18:11:38.828607 | orchestrator | Monday 02 June 2025 18:11:38 +0000 (0:00:00.439) 0:02:46.121 *********** 2025-06-02 18:11:38.829129 | orchestrator | =============================================================================== 2025-06-02 18:11:38.829648 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 160.12s 2025-06-02 18:11:38.830163 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.13s 2025-06-02 18:11:38.830846 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-06-02 18:11:38.831532 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s 2025-06-02 18:11:38.832159 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.44s 2025-06-02 18:11:38.832772 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.43s 2025-06-02 18:11:38.833219 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s 2025-06-02 18:11:38.833564 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.33s 2025-06-02 18:11:39.443959 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-06-02 18:11:39.456231 | orchestrator | + set -e 
2025-06-02 18:11:39.456351 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-02 18:11:39.456379 | orchestrator | ++ export INTERACTIVE=false 2025-06-02 18:11:39.456397 | orchestrator | ++ INTERACTIVE=false 2025-06-02 18:11:39.456413 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-02 18:11:39.456430 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-02 18:11:39.456448 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-02 18:11:39.457244 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-02 18:11:39.463834 | orchestrator | 2025-06-02 18:11:39.463924 | orchestrator | # OpenStack endpoints 2025-06-02 18:11:39.463948 | orchestrator | 2025-06-02 18:11:39.463968 | orchestrator | ++ export MANAGER_VERSION=latest 2025-06-02 18:11:39.463987 | orchestrator | ++ MANAGER_VERSION=latest 2025-06-02 18:11:39.464004 | orchestrator | + export OS_CLOUD=admin 2025-06-02 18:11:39.464016 | orchestrator | + OS_CLOUD=admin 2025-06-02 18:11:39.464027 | orchestrator | + echo 2025-06-02 18:11:39.464038 | orchestrator | + echo '# OpenStack endpoints' 2025-06-02 18:11:39.464049 | orchestrator | + echo 2025-06-02 18:11:39.464060 | orchestrator | + openstack endpoint list 2025-06-02 18:11:43.325411 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-02 18:11:43.325549 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-06-02 18:11:43.325564 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-02 18:11:43.325576 | orchestrator | | 0c3c0130eae3477e9c0e294492cc23b1 | RegionOne | octavia | load-balancer | True | internal | 
https://api-int.testbed.osism.xyz:9876 | 2025-06-02 18:11:43.325588 | orchestrator | | 1438bdbd3ab047138d8d19d28b5480fc | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-06-02 18:11:43.325599 | orchestrator | | 18ee861fae1343ad922e524f65cff73c | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-06-02 18:11:43.325610 | orchestrator | | 2011ee28f2d4462abc5cfa127b8c6e76 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-06-02 18:11:43.325621 | orchestrator | | 47e02ffd89bd438488aa694ff7152fb1 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-06-02 18:11:43.325632 | orchestrator | | 577f71882eb54b2c9160f9867196d953 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-06-02 18:11:43.325671 | orchestrator | | 61a527659bb346ef8fdb1f2f3d94baf4 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-06-02 18:11:43.325761 | orchestrator | | 6a07bfc2ac50448995a129253c4ac0db | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-06-02 18:11:43.325774 | orchestrator | | 71c32023083242d88771fa07175d1469 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-06-02 18:11:43.325785 | orchestrator | | 80fbde2c73234ff58d45eb87d77bcf35 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-06-02 18:11:43.325796 | orchestrator | | 9c3feb4add764347a2e06d48c1fca913 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-06-02 18:11:43.325806 | orchestrator | | a0e82303f398474db763b98cf5a9bc07 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-06-02 18:11:43.325817 | orchestrator | | 
b5d08d99d317420e80e2120c25de2803 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-02 18:11:43.325828 | orchestrator | | bb4b95db96cf4b3281665f90a82df889 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-06-02 18:11:43.325839 | orchestrator | | c3277bfb73e443b9a45d2dda4966d2eb | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-02 18:11:43.325853 | orchestrator | | cc24ac5d67d34cb58399371304a4cd52 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-06-02 18:11:43.325872 | orchestrator | | cc760125f7b84d76bf556fd3ca70320e | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-06-02 18:11:43.325889 | orchestrator | | de29136e72304324acd2fcfa4d240593 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-06-02 18:11:43.325907 | orchestrator | | e2e397ebb23f480b832ff75dd13e9df9 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-06-02 18:11:43.325950 | orchestrator | | e920bae624584fd09b18596ce3d3bc32 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-06-02 18:11:43.325996 | orchestrator | | f252211cfdf7462991c6b6ce0ab0eb7c | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-02 18:11:43.326011 | orchestrator | | f58908ef18f448b59b159e8d1308f70d | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-02 18:11:43.326086 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-02 18:11:43.612736 | 
orchestrator | 2025-06-02 18:11:43.612828 | orchestrator | # Cinder 2025-06-02 18:11:43.612834 | orchestrator | 2025-06-02 18:11:43.612839 | orchestrator | + echo 2025-06-02 18:11:43.612843 | orchestrator | + echo '# Cinder' 2025-06-02 18:11:43.612848 | orchestrator | + echo 2025-06-02 18:11:43.612853 | orchestrator | + openstack volume service list 2025-06-02 18:11:47.067645 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-02 18:11:47.067819 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-06-02 18:11:47.067856 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-02 18:11:47.067867 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-02T18:11:41.000000 | 2025-06-02 18:11:47.067880 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-02T18:11:43.000000 | 2025-06-02 18:11:47.067895 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-02T18:11:44.000000 | 2025-06-02 18:11:47.067908 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-02T18:11:45.000000 | 2025-06-02 18:11:47.067922 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-02T18:11:46.000000 | 2025-06-02 18:11:47.067937 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-02T18:11:42.000000 | 2025-06-02 18:11:47.067951 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-06-02T18:11:38.000000 | 2025-06-02 18:11:47.067965 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-02T18:11:39.000000 | 2025-06-02 18:11:47.067979 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-06-02T18:11:39.000000 | 2025-06-02 
18:11:47.067993 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-02 18:11:47.360485 | orchestrator | 2025-06-02 18:11:47.360599 | orchestrator | # Neutron 2025-06-02 18:11:47.360621 | orchestrator | 2025-06-02 18:11:47.360638 | orchestrator | + echo 2025-06-02 18:11:47.360655 | orchestrator | + echo '# Neutron' 2025-06-02 18:11:47.360673 | orchestrator | + echo 2025-06-02 18:11:47.360760 | orchestrator | + openstack network agent list 2025-06-02 18:11:50.274732 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-02 18:11:50.274837 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-06-02 18:11:50.274850 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-02 18:11:50.274860 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-06-02 18:11:50.274869 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-06-02 18:11:50.274878 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-06-02 18:11:50.274886 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-06-02 18:11:50.274895 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-06-02 18:11:50.274908 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-06-02 18:11:50.274923 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | 
:-) | UP | neutron-ovn-metadata-agent | 2025-06-02 18:11:50.274938 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-02 18:11:50.274973 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-02 18:11:50.274989 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-02 18:11:50.581809 | orchestrator | + openstack network service provider list 2025-06-02 18:11:53.199105 | orchestrator | +---------------+------+---------+ 2025-06-02 18:11:53.199230 | orchestrator | | Service Type | Name | Default | 2025-06-02 18:11:53.199250 | orchestrator | +---------------+------+---------+ 2025-06-02 18:11:53.199263 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-06-02 18:11:53.199276 | orchestrator | +---------------+------+---------+ 2025-06-02 18:11:53.505397 | orchestrator | 2025-06-02 18:11:53.505497 | orchestrator | # Nova 2025-06-02 18:11:53.505512 | orchestrator | 2025-06-02 18:11:53.505524 | orchestrator | + echo 2025-06-02 18:11:53.505535 | orchestrator | + echo '# Nova' 2025-06-02 18:11:53.505546 | orchestrator | + echo 2025-06-02 18:11:53.505557 | orchestrator | + openstack compute service list 2025-06-02 18:11:56.420152 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-02 18:11:56.420262 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-06-02 18:11:56.420277 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-02 18:11:56.420290 | orchestrator | | 9fe6b745-10f4-4f8b-b9b5-5d06b3b4fcac | nova-scheduler | testbed-node-2 | 
internal | enabled | up | 2025-06-02T18:11:51.000000 | 2025-06-02 18:11:56.420301 | orchestrator | | 63bdbd08-d92a-4d3a-a284-f42a99a3b2c3 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-02T18:11:52.000000 | 2025-06-02 18:11:56.420312 | orchestrator | | 5f9b8318-148e-4f01-bca8-78dcadb0f9ac | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-02T18:11:48.000000 | 2025-06-02 18:11:56.420323 | orchestrator | | 50a5feaa-1aac-4aad-a0d5-cbded64424bb | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-02T18:11:51.000000 | 2025-06-02 18:11:56.420334 | orchestrator | | 24e26926-44f9-4172-8a7a-262180123cd4 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-02T18:11:51.000000 | 2025-06-02 18:11:56.420345 | orchestrator | | 0af427a7-752b-4387-8830-eb8c6ff69c90 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-06-02T18:11:52.000000 | 2025-06-02 18:11:56.420356 | orchestrator | | 2f9b08a6-fb92-4188-a2e1-527de7fe1ab0 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-02T18:11:55.000000 | 2025-06-02 18:11:56.420367 | orchestrator | | 7e260e75-fbc4-43cc-814a-e15b14eb9651 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-02T18:11:56.000000 | 2025-06-02 18:11:56.420378 | orchestrator | | 1ff4d064-6264-4f47-b265-2bb6fa2ce4ac | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-02T18:11:47.000000 | 2025-06-02 18:11:56.420389 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-02 18:11:56.705447 | orchestrator | + openstack hypervisor list 2025-06-02 18:12:01.083039 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-02 18:12:01.083180 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-06-02 18:12:01.083203 | orchestrator | 
+--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-02 18:12:01.083215 | orchestrator | | 61eb5687-bab4-4cc7-bf84-327f9ae15cbd | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-06-02 18:12:01.083226 | orchestrator | | 2ee3aaa8-7c94-482d-b8ac-5e4a110f32b3 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-06-02 18:12:01.083237 | orchestrator | | 83bddb9a-0799-44db-aa62-9ee6dce52bfe | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-06-02 18:12:01.083248 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-02 18:12:01.356909 | orchestrator | 2025-06-02 18:12:01.357017 | orchestrator | # Run OpenStack test play 2025-06-02 18:12:01.357026 | orchestrator | 2025-06-02 18:12:01.357031 | orchestrator | + echo 2025-06-02 18:12:01.357036 | orchestrator | + echo '# Run OpenStack test play' 2025-06-02 18:12:01.357042 | orchestrator | + echo 2025-06-02 18:12:01.357066 | orchestrator | + osism apply --environment openstack test 2025-06-02 18:12:03.072017 | orchestrator | 2025-06-02 18:12:03 | INFO  | Trying to run play test in environment openstack 2025-06-02 18:12:03.077807 | orchestrator | Registering Redlock._acquired_script 2025-06-02 18:12:03.077863 | orchestrator | Registering Redlock._extend_script 2025-06-02 18:12:03.077871 | orchestrator | Registering Redlock._release_script 2025-06-02 18:12:03.140651 | orchestrator | 2025-06-02 18:12:03 | INFO  | Task 052e18bf-dfb7-4b29-9f39-7cf29fe1c946 (test) was prepared for execution. 2025-06-02 18:12:03.140801 | orchestrator | 2025-06-02 18:12:03 | INFO  | It takes a moment until task 052e18bf-dfb7-4b29-9f39-7cf29fe1c946 (test) has been started and output is visible here. 
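The service listings above are eyeballed in the job output; the same gate can be scripted by parsing machine-readable CLI output instead of the ASCII tables. A sketch under the assumption that `openstack compute service list -f value -c Binary -c State` produces one `binary state` pair per line (the sample below mirrors the all-up state shown in the log; a real check would pipe live output):

```shell
#!/usr/bin/env bash
# Stand-in for: openstack compute service list -f value -c Binary -c State
sample='nova-scheduler up
nova-scheduler up
nova-scheduler up
nova-conductor up
nova-conductor up
nova-conductor up
nova-compute up
nova-compute up
nova-compute up'

# Count services whose State column is "down"; print 0 when none match.
down_count=$(printf '%s\n' "$sample" | awk '$2 == "down" {n++} END {print n+0}')
echo "services down: $down_count"
```

With live data the script would exit non-zero when `down_count` is greater than 0, which is the kind of assertion the subsequent test play automates.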
2025-06-02 18:12:07.235185 | orchestrator | 2025-06-02 18:12:07.235454 | orchestrator | PLAY [Create test project] ***************************************************** 2025-06-02 18:12:07.235480 | orchestrator | 2025-06-02 18:12:07.237190 | orchestrator | TASK [Create test domain] ****************************************************** 2025-06-02 18:12:07.238483 | orchestrator | Monday 02 June 2025 18:12:07 +0000 (0:00:00.090) 0:00:00.090 *********** 2025-06-02 18:12:10.893134 | orchestrator | changed: [localhost] 2025-06-02 18:12:10.893721 | orchestrator | 2025-06-02 18:12:10.894769 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-06-02 18:12:10.896589 | orchestrator | Monday 02 June 2025 18:12:10 +0000 (0:00:03.661) 0:00:03.751 *********** 2025-06-02 18:12:15.126984 | orchestrator | changed: [localhost] 2025-06-02 18:12:15.129889 | orchestrator | 2025-06-02 18:12:15.130052 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-06-02 18:12:15.132472 | orchestrator | Monday 02 June 2025 18:12:15 +0000 (0:00:04.231) 0:00:07.982 *********** 2025-06-02 18:12:21.509570 | orchestrator | changed: [localhost] 2025-06-02 18:12:21.511275 | orchestrator | 2025-06-02 18:12:21.512108 | orchestrator | TASK [Create test project] ***************************************************** 2025-06-02 18:12:21.515215 | orchestrator | Monday 02 June 2025 18:12:21 +0000 (0:00:06.384) 0:00:14.366 *********** 2025-06-02 18:12:25.719391 | orchestrator | changed: [localhost] 2025-06-02 18:12:25.719788 | orchestrator | 2025-06-02 18:12:25.721034 | orchestrator | TASK [Create test user] ******************************************************** 2025-06-02 18:12:25.723460 | orchestrator | Monday 02 June 2025 18:12:25 +0000 (0:00:04.208) 0:00:18.575 *********** 2025-06-02 18:12:29.895875 | orchestrator | changed: [localhost] 2025-06-02 18:12:29.895999 | orchestrator | 2025-06-02 18:12:29.896479 | 
orchestrator | TASK [Add member roles to user test] ******************************************* 2025-06-02 18:12:29.897168 | orchestrator | Monday 02 June 2025 18:12:29 +0000 (0:00:04.178) 0:00:22.753 *********** 2025-06-02 18:12:42.207320 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-06-02 18:12:42.207467 | orchestrator | changed: [localhost] => (item=member) 2025-06-02 18:12:42.209218 | orchestrator | changed: [localhost] => (item=creator) 2025-06-02 18:12:42.209761 | orchestrator | 2025-06-02 18:12:42.210539 | orchestrator | TASK [Create test server group] ************************************************ 2025-06-02 18:12:42.212065 | orchestrator | Monday 02 June 2025 18:12:42 +0000 (0:00:12.307) 0:00:35.061 *********** 2025-06-02 18:12:46.672266 | orchestrator | changed: [localhost] 2025-06-02 18:12:46.672501 | orchestrator | 2025-06-02 18:12:46.673184 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-06-02 18:12:46.673743 | orchestrator | Monday 02 June 2025 18:12:46 +0000 (0:00:04.468) 0:00:39.529 *********** 2025-06-02 18:12:51.635315 | orchestrator | changed: [localhost] 2025-06-02 18:12:51.635468 | orchestrator | 2025-06-02 18:12:51.636471 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-06-02 18:12:51.637961 | orchestrator | Monday 02 June 2025 18:12:51 +0000 (0:00:04.962) 0:00:44.492 *********** 2025-06-02 18:12:55.928385 | orchestrator | changed: [localhost] 2025-06-02 18:12:55.928583 | orchestrator | 2025-06-02 18:12:55.928621 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-06-02 18:12:55.929359 | orchestrator | Monday 02 June 2025 18:12:55 +0000 (0:00:04.293) 0:00:48.786 *********** 2025-06-02 18:12:59.925448 | orchestrator | changed: [localhost] 2025-06-02 18:12:59.926764 | orchestrator | 2025-06-02 18:12:59.927639 | orchestrator | TASK [Add rule to icmp security 
group] ***************************************** 2025-06-02 18:12:59.929652 | orchestrator | Monday 02 June 2025 18:12:59 +0000 (0:00:03.995) 0:00:52.782 *********** 2025-06-02 18:13:04.244562 | orchestrator | changed: [localhost] 2025-06-02 18:13:04.246120 | orchestrator | 2025-06-02 18:13:04.246835 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-06-02 18:13:04.247823 | orchestrator | Monday 02 June 2025 18:13:04 +0000 (0:00:04.319) 0:00:57.102 *********** 2025-06-02 18:13:08.729491 | orchestrator | changed: [localhost] 2025-06-02 18:13:08.729589 | orchestrator | 2025-06-02 18:13:08.731406 | orchestrator | TASK [Create test network topology] ******************************************** 2025-06-02 18:13:08.733069 | orchestrator | Monday 02 June 2025 18:13:08 +0000 (0:00:04.484) 0:01:01.586 *********** 2025-06-02 18:13:25.593144 | orchestrator | changed: [localhost] 2025-06-02 18:13:25.593261 | orchestrator | 2025-06-02 18:13:25.593277 | orchestrator | TASK [Create test instances] *************************************************** 2025-06-02 18:13:25.593293 | orchestrator | Monday 02 June 2025 18:13:25 +0000 (0:00:16.862) 0:01:18.448 *********** 2025-06-02 18:15:37.092467 | orchestrator | changed: [localhost] => (item=test) 2025-06-02 18:15:37.092666 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-02 18:15:37.092687 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-02 18:15:37.452118 | orchestrator | 2025-06-02 18:15:37.452198 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-02 18:16:07.094550 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-02 18:16:07.094702 | orchestrator | 2025-06-02 18:16:07.094719 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-02 18:16:37.094235 | orchestrator | 2025-06-02 18:16:37.094357 | orchestrator | STILL ALIVE [task 'Create 
test instances' is running] ************************** 2025-06-02 18:16:37.871391 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-02 18:16:37.874828 | orchestrator | 2025-06-02 18:16:37.874861 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-06-02 18:16:37.874872 | orchestrator | Monday 02 June 2025 18:16:37 +0000 (0:03:12.282) 0:04:30.731 *********** 2025-06-02 18:17:02.072021 | orchestrator | changed: [localhost] => (item=test) 2025-06-02 18:17:02.072148 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-02 18:17:02.072165 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-02 18:17:02.072177 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-02 18:17:02.072189 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-02 18:17:02.073136 | orchestrator | 2025-06-02 18:17:02.073453 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-06-02 18:17:02.074407 | orchestrator | Monday 02 June 2025 18:17:02 +0000 (0:00:24.194) 0:04:54.926 *********** 2025-06-02 18:17:34.780432 | orchestrator | changed: [localhost] => (item=test) 2025-06-02 18:17:34.780555 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-02 18:17:34.780635 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-02 18:17:34.780815 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-02 18:17:34.780834 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-02 18:17:34.780846 | orchestrator | 2025-06-02 18:17:34.780858 | orchestrator | TASK [Create test volume] ****************************************************** 2025-06-02 18:17:34.781082 | orchestrator | Monday 02 June 2025 18:17:34 +0000 (0:00:32.706) 0:05:27.633 *********** 2025-06-02 18:17:42.694223 | orchestrator | changed: [localhost] 2025-06-02 18:17:42.695313 | orchestrator | 2025-06-02 18:17:42.695818 | orchestrator | TASK [Attach test volume] 
****************************************************** 2025-06-02 18:17:42.696442 | orchestrator | Monday 02 June 2025 18:17:42 +0000 (0:00:07.920) 0:05:35.553 *********** 2025-06-02 18:17:56.285104 | orchestrator | changed: [localhost] 2025-06-02 18:17:56.285212 | orchestrator | 2025-06-02 18:17:56.285245 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-06-02 18:17:56.285279 | orchestrator | Monday 02 June 2025 18:17:56 +0000 (0:00:13.586) 0:05:49.139 *********** 2025-06-02 18:18:01.403200 | orchestrator | ok: [localhost] 2025-06-02 18:18:01.403444 | orchestrator | 2025-06-02 18:18:01.403482 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-06-02 18:18:01.403862 | orchestrator | Monday 02 June 2025 18:18:01 +0000 (0:00:05.123) 0:05:54.262 *********** 2025-06-02 18:18:01.455641 | orchestrator | ok: [localhost] => { 2025-06-02 18:18:01.456410 | orchestrator |  "msg": "192.168.112.108" 2025-06-02 18:18:01.457992 | orchestrator | } 2025-06-02 18:18:01.458705 | orchestrator | 2025-06-02 18:18:01.460008 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-02 18:18:01.460356 | orchestrator | 2025-06-02 18:18:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-02 18:18:01.460391 | orchestrator | 2025-06-02 18:18:01 | INFO  | Please wait and do not abort execution. 
2025-06-02 18:18:01.461112 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-02 18:18:01.461652 | orchestrator | 2025-06-02 18:18:01.462925 | orchestrator | 2025-06-02 18:18:01.463472 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-02 18:18:01.463794 | orchestrator | Monday 02 June 2025 18:18:01 +0000 (0:00:00.050) 0:05:54.313 *********** 2025-06-02 18:18:01.464939 | orchestrator | =============================================================================== 2025-06-02 18:18:01.464996 | orchestrator | Create test instances ------------------------------------------------- 192.28s 2025-06-02 18:18:01.465768 | orchestrator | Add tag to instances --------------------------------------------------- 32.71s 2025-06-02 18:18:01.466093 | orchestrator | Add metadata to instances ---------------------------------------------- 24.20s 2025-06-02 18:18:01.466594 | orchestrator | Create test network topology ------------------------------------------- 16.86s 2025-06-02 18:18:01.466933 | orchestrator | Attach test volume ----------------------------------------------------- 13.59s 2025-06-02 18:18:01.467441 | orchestrator | Add member roles to user test ------------------------------------------ 12.31s 2025-06-02 18:18:01.467853 | orchestrator | Create test volume ------------------------------------------------------ 7.92s 2025-06-02 18:18:01.468154 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.38s 2025-06-02 18:18:01.468526 | orchestrator | Create floating ip address ---------------------------------------------- 5.12s 2025-06-02 18:18:01.469048 | orchestrator | Create ssh security group ----------------------------------------------- 4.96s 2025-06-02 18:18:01.469494 | orchestrator | Create test keypair ----------------------------------------------------- 4.48s 2025-06-02 18:18:01.470069 | orchestrator | Create 
test server group ------------------------------------------------ 4.47s 2025-06-02 18:18:01.471693 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.32s 2025-06-02 18:18:01.472472 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.29s 2025-06-02 18:18:01.473319 | orchestrator | Create test-admin user -------------------------------------------------- 4.23s 2025-06-02 18:18:01.473588 | orchestrator | Create test project ----------------------------------------------------- 4.21s 2025-06-02 18:18:01.474281 | orchestrator | Create test user -------------------------------------------------------- 4.18s 2025-06-02 18:18:01.475153 | orchestrator | Create icmp security group ---------------------------------------------- 4.00s 2025-06-02 18:18:01.475586 | orchestrator | Create test domain ------------------------------------------------------ 3.66s 2025-06-02 18:18:01.475947 | orchestrator | Print floating ip address ----------------------------------------------- 0.05s 2025-06-02 18:18:01.987136 | orchestrator | + server_list 2025-06-02 18:18:01.987229 | orchestrator | + openstack --os-cloud test server list 2025-06-02 18:18:05.848412 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-02 18:18:05.848605 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-06-02 18:18:05.848624 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-02 18:18:05.848635 | orchestrator | | 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd | test-4 | ACTIVE | auto_allocated_network=10.42.0.9, 192.168.112.109 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-02 18:18:05.848646 | orchestrator | | fa51a826-4aee-4e03-9c5f-a21faaad00f1 | test-3 | ACTIVE | auto_allocated_network=10.42.0.54, 
192.168.112.167 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-02 18:18:05.848657 | orchestrator | | 86266dc3-ede7-45da-b3c5-0df750474de5 | test-2 | ACTIVE | auto_allocated_network=10.42.0.60, 192.168.112.105 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-02 18:18:05.848668 | orchestrator | | 9d171368-cb14-44b8-ac6d-2c157d57bf42 | test-1 | ACTIVE | auto_allocated_network=10.42.0.5, 192.168.112.103 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-02 18:18:05.848679 | orchestrator | | 4612ae89-311d-43ce-96b5-4e8083b69da2 | test | ACTIVE | auto_allocated_network=10.42.0.15, 192.168.112.108 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-02 18:18:05.848690 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-02 18:18:06.170114 | orchestrator | + openstack --os-cloud test server show test 2025-06-02 18:18:09.768022 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:09.768125 | orchestrator | | Field | Value | 2025-06-02 18:18:09.768140 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:09.768152 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 18:18:09.768164 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 18:18:09.768175 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 18:18:09.768186 | orchestrator | | 
OS-EXT-SRV-ATTR:hostname | test | 2025-06-02 18:18:09.768216 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 18:18:09.768227 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 18:18:09.768238 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 18:18:09.768249 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 18:18:09.768284 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 18:18:09.768297 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 18:18:09.768308 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 18:18:09.768319 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 18:18:09.768330 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 18:18:09.768341 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 18:18:09.768352 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 18:18:09.768369 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T18:13:55.000000 | 2025-06-02 18:18:09.768380 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 18:18:09.768391 | orchestrator | | accessIPv4 | | 2025-06-02 18:18:09.768402 | orchestrator | | accessIPv6 | | 2025-06-02 18:18:09.768413 | orchestrator | | addresses | auto_allocated_network=10.42.0.15, 192.168.112.108 | 2025-06-02 18:18:09.768434 | orchestrator | | config_drive | | 2025-06-02 18:18:09.768446 | orchestrator | | created | 2025-06-02T18:13:34Z | 2025-06-02 18:18:09.768457 | orchestrator | | description | None | 2025-06-02 18:18:09.768468 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 18:18:09.768479 | orchestrator | | hostId | 
421c7637b5927766a92b509480751de7c442a05851bb3ca47bd9e9bd | 2025-06-02 18:18:09.768491 | orchestrator | | host_status | None | 2025-06-02 18:18:09.768508 | orchestrator | | id | 4612ae89-311d-43ce-96b5-4e8083b69da2 | 2025-06-02 18:18:09.768519 | orchestrator | | image | Cirros 0.6.2 (5c60a273-a8da-49da-838d-b0b2f5da0139) | 2025-06-02 18:18:09.768530 | orchestrator | | key_name | test | 2025-06-02 18:18:09.768544 | orchestrator | | locked | False | 2025-06-02 18:18:09.768585 | orchestrator | | locked_reason | None | 2025-06-02 18:18:09.768600 | orchestrator | | name | test | 2025-06-02 18:18:09.768628 | orchestrator | | pinned_availability_zone | None | 2025-06-02 18:18:09.768642 | orchestrator | | progress | 0 | 2025-06-02 18:18:09.768655 | orchestrator | | project_id | 79bd88faacdc4641bbb97d45a5899593 | 2025-06-02 18:18:09.768668 | orchestrator | | properties | hostname='test' | 2025-06-02 18:18:09.768681 | orchestrator | | security_groups | name='ssh' | 2025-06-02 18:18:09.768700 | orchestrator | | | name='icmp' | 2025-06-02 18:18:09.768713 | orchestrator | | server_groups | None | 2025-06-02 18:18:09.768724 | orchestrator | | status | ACTIVE | 2025-06-02 18:18:09.768735 | orchestrator | | tags | test | 2025-06-02 18:18:09.768746 | orchestrator | | trusted_image_certificates | None | 2025-06-02 18:18:09.768756 | orchestrator | | updated | 2025-06-02T18:16:42Z | 2025-06-02 18:18:09.768777 | orchestrator | | user_id | 915d083c15e042fa8dbbd0ee4db7fbad | 2025-06-02 18:18:09.768789 | orchestrator | | volumes_attached | delete_on_termination='False', id='fb91080d-2f88-413c-a835-2d7be27b68ca' | 2025-06-02 18:18:09.771760 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:10.063499 | orchestrator | + openstack --os-cloud test server show test-1 2025-06-02 18:18:13.311517 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:13.311720 | orchestrator | | Field | Value | 2025-06-02 18:18:13.311744 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:13.311756 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 18:18:13.311768 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 18:18:13.311779 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 18:18:13.311790 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-06-02 18:18:13.311801 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 18:18:13.311811 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 18:18:13.311839 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 18:18:13.311851 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 18:18:13.311881 | orchestrator | | 
OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 18:18:13.311902 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 18:18:13.311913 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 18:18:13.311924 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 18:18:13.311935 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 18:18:13.311946 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 18:18:13.311957 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 18:18:13.311967 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T18:14:41.000000 | 2025-06-02 18:18:13.311978 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 18:18:13.311995 | orchestrator | | accessIPv4 | | 2025-06-02 18:18:13.312006 | orchestrator | | accessIPv6 | | 2025-06-02 18:18:13.312018 | orchestrator | | addresses | auto_allocated_network=10.42.0.5, 192.168.112.103 | 2025-06-02 18:18:13.312042 | orchestrator | | config_drive | | 2025-06-02 18:18:13.312054 | orchestrator | | created | 2025-06-02T18:14:19Z | 2025-06-02 18:18:13.312065 | orchestrator | | description | None | 2025-06-02 18:18:13.312076 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 18:18:13.312087 | orchestrator | | hostId | ccc41257150bf9dd8321eaef389cfee3e1bf7e516e43da64064dd642 | 2025-06-02 18:18:13.312098 | orchestrator | | host_status | None | 2025-06-02 18:18:13.312109 | orchestrator | | id | 9d171368-cb14-44b8-ac6d-2c157d57bf42 | 2025-06-02 18:18:13.312120 | orchestrator | | image | Cirros 0.6.2 (5c60a273-a8da-49da-838d-b0b2f5da0139) | 2025-06-02 18:18:13.312131 | orchestrator | | key_name | test | 2025-06-02 18:18:13.312147 | orchestrator | 
| locked | False | 2025-06-02 18:18:13.312159 | orchestrator | | locked_reason | None | 2025-06-02 18:18:13.312183 | orchestrator | | name | test-1 | 2025-06-02 18:18:13.312211 | orchestrator | | pinned_availability_zone | None | 2025-06-02 18:18:13.312230 | orchestrator | | progress | 0 | 2025-06-02 18:18:13.312249 | orchestrator | | project_id | 79bd88faacdc4641bbb97d45a5899593 | 2025-06-02 18:18:13.312266 | orchestrator | | properties | hostname='test-1' | 2025-06-02 18:18:13.312285 | orchestrator | | security_groups | name='ssh' | 2025-06-02 18:18:13.312304 | orchestrator | | | name='icmp' | 2025-06-02 18:18:13.312323 | orchestrator | | server_groups | None | 2025-06-02 18:18:13.312340 | orchestrator | | status | ACTIVE | 2025-06-02 18:18:13.312359 | orchestrator | | tags | test | 2025-06-02 18:18:13.312389 | orchestrator | | trusted_image_certificates | None | 2025-06-02 18:18:13.312408 | orchestrator | | updated | 2025-06-02T18:16:47Z | 2025-06-02 18:18:13.312435 | orchestrator | | user_id | 915d083c15e042fa8dbbd0ee4db7fbad | 2025-06-02 18:18:13.312455 | orchestrator | | volumes_attached | | 2025-06-02 18:18:13.315416 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:13.606197 | orchestrator | + openstack --os-cloud test server show test-2 2025-06-02 18:18:16.912414 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 
2025-06-02 18:18:16.912525 | orchestrator | | Field | Value | 2025-06-02 18:18:16.912537 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:16.912546 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 18:18:16.912595 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 18:18:16.912645 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 18:18:16.912682 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-06-02 18:18:16.912691 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 18:18:16.912699 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 18:18:16.912786 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 18:18:16.912797 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 18:18:16.912824 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 18:18:16.912834 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 18:18:16.912842 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 18:18:16.912851 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 18:18:16.912860 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 18:18:16.912868 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 18:18:16.912884 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 18:18:16.912896 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T18:15:19.000000 | 2025-06-02 18:18:16.912905 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 18:18:16.912913 | orchestrator | | accessIPv4 | | 2025-06-02 18:18:16.912922 | orchestrator | | accessIPv6 | | 2025-06-02 
18:18:16.912931 | orchestrator | | addresses | auto_allocated_network=10.42.0.60, 192.168.112.105 | 2025-06-02 18:18:16.912945 | orchestrator | | config_drive | | 2025-06-02 18:18:16.912955 | orchestrator | | created | 2025-06-02T18:14:58Z | 2025-06-02 18:18:16.912965 | orchestrator | | description | None | 2025-06-02 18:18:16.912975 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 18:18:16.912985 | orchestrator | | hostId | 79c1b869bfba14ad2e5ef358868dcbce00bf76b8f094418bd51c7f2a | 2025-06-02 18:18:16.912999 | orchestrator | | host_status | None | 2025-06-02 18:18:16.913009 | orchestrator | | id | 86266dc3-ede7-45da-b3c5-0df750474de5 | 2025-06-02 18:18:16.913022 | orchestrator | | image | Cirros 0.6.2 (5c60a273-a8da-49da-838d-b0b2f5da0139) | 2025-06-02 18:18:16.913031 | orchestrator | | key_name | test | 2025-06-02 18:18:16.913040 | orchestrator | | locked | False | 2025-06-02 18:18:16.913048 | orchestrator | | locked_reason | None | 2025-06-02 18:18:16.913057 | orchestrator | | name | test-2 | 2025-06-02 18:18:16.913071 | orchestrator | | pinned_availability_zone | None | 2025-06-02 18:18:16.913079 | orchestrator | | progress | 0 | 2025-06-02 18:18:16.913088 | orchestrator | | project_id | 79bd88faacdc4641bbb97d45a5899593 | 2025-06-02 18:18:16.913103 | orchestrator | | properties | hostname='test-2' | 2025-06-02 18:18:16.913112 | orchestrator | | security_groups | name='ssh' | 2025-06-02 18:18:16.913121 | orchestrator | | | name='icmp' | 2025-06-02 18:18:16.913133 | orchestrator | | server_groups | None | 2025-06-02 18:18:16.913141 | orchestrator | | status | ACTIVE | 2025-06-02 18:18:16.913150 | orchestrator | | tags | test | 2025-06-02 18:18:16.913159 | orchestrator 
| | trusted_image_certificates | None | 2025-06-02 18:18:16.913168 | orchestrator | | updated | 2025-06-02T18:16:52Z | 2025-06-02 18:18:16.913181 | orchestrator | | user_id | 915d083c15e042fa8dbbd0ee4db7fbad | 2025-06-02 18:18:16.913190 | orchestrator | | volumes_attached | | 2025-06-02 18:18:16.915208 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:17.221768 | orchestrator | + openstack --os-cloud test server show test-3 2025-06-02 18:18:20.378402 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:20.378511 | orchestrator | | Field | Value | 2025-06-02 18:18:20.378527 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:20.378540 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 18:18:20.378612 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 18:18:20.378625 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 18:18:20.378635 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-06-02 18:18:20.378645 | orchestrator | | 
OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 18:18:20.378655 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 18:18:20.378665 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 18:18:20.378675 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 18:18:20.378720 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 18:18:20.378732 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 18:18:20.378742 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 18:18:20.378751 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 18:18:20.378762 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 18:18:20.378772 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 18:18:20.378781 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 18:18:20.378791 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T18:15:52.000000 | 2025-06-02 18:18:20.378801 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 18:18:20.378818 | orchestrator | | accessIPv4 | | 2025-06-02 18:18:20.378828 | orchestrator | | accessIPv6 | | 2025-06-02 18:18:20.378845 | orchestrator | | addresses | auto_allocated_network=10.42.0.54, 192.168.112.167 | 2025-06-02 18:18:20.378861 | orchestrator | | config_drive | | 2025-06-02 18:18:20.378871 | orchestrator | | created | 2025-06-02T18:15:37Z | 2025-06-02 18:18:20.378881 | orchestrator | | description | None | 2025-06-02 18:18:20.378890 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 18:18:20.378905 | orchestrator | | hostId | ccc41257150bf9dd8321eaef389cfee3e1bf7e516e43da64064dd642 | 2025-06-02 18:18:20.378915 | 
orchestrator | | host_status | None | 2025-06-02 18:18:20.378924 | orchestrator | | id | fa51a826-4aee-4e03-9c5f-a21faaad00f1 | 2025-06-02 18:18:20.378934 | orchestrator | | image | Cirros 0.6.2 (5c60a273-a8da-49da-838d-b0b2f5da0139) | 2025-06-02 18:18:20.378944 | orchestrator | | key_name | test | 2025-06-02 18:18:20.378967 | orchestrator | | locked | False | 2025-06-02 18:18:20.378977 | orchestrator | | locked_reason | None | 2025-06-02 18:18:20.378987 | orchestrator | | name | test-3 | 2025-06-02 18:18:20.379002 | orchestrator | | pinned_availability_zone | None | 2025-06-02 18:18:20.379012 | orchestrator | | progress | 0 | 2025-06-02 18:18:20.379022 | orchestrator | | project_id | 79bd88faacdc4641bbb97d45a5899593 | 2025-06-02 18:18:20.379031 | orchestrator | | properties | hostname='test-3' | 2025-06-02 18:18:20.379046 | orchestrator | | security_groups | name='ssh' | 2025-06-02 18:18:20.379056 | orchestrator | | | name='icmp' | 2025-06-02 18:18:20.379066 | orchestrator | | server_groups | None | 2025-06-02 18:18:20.379075 | orchestrator | | status | ACTIVE | 2025-06-02 18:18:20.379094 | orchestrator | | tags | test | 2025-06-02 18:18:20.379104 | orchestrator | | trusted_image_certificates | None | 2025-06-02 18:18:20.379114 | orchestrator | | updated | 2025-06-02T18:16:56Z | 2025-06-02 18:18:20.379128 | orchestrator | | user_id | 915d083c15e042fa8dbbd0ee4db7fbad | 2025-06-02 18:18:20.379139 | orchestrator | | volumes_attached | | 2025-06-02 18:18:20.381455 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:20.680920 | orchestrator | + openstack --os-cloud test server show test-4 2025-06-02 18:18:23.915226 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:23.915356 | orchestrator | | Field | Value | 2025-06-02 18:18:23.915372 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:23.915384 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-02 18:18:23.915396 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-02 18:18:23.915427 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-02 18:18:23.915439 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-06-02 18:18:23.915450 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-02 18:18:23.915465 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-02 18:18:23.915484 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-02 18:18:23.915499 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-02 18:18:23.915527 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-02 18:18:23.915539 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-02 18:18:23.915607 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-02 18:18:23.915620 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-02 18:18:23.915639 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-02 18:18:23.915650 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-02 18:18:23.915661 | 
orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-02 18:18:23.915672 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-02T18:16:25.000000 | 2025-06-02 18:18:23.915683 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-02 18:18:23.915694 | orchestrator | | accessIPv4 | | 2025-06-02 18:18:23.915705 | orchestrator | | accessIPv6 | | 2025-06-02 18:18:23.915716 | orchestrator | | addresses | auto_allocated_network=10.42.0.9, 192.168.112.109 | 2025-06-02 18:18:23.915734 | orchestrator | | config_drive | | 2025-06-02 18:18:23.915745 | orchestrator | | created | 2025-06-02T18:16:09Z | 2025-06-02 18:18:23.915761 | orchestrator | | description | None | 2025-06-02 18:18:23.915779 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-02 18:18:23.915790 | orchestrator | | hostId | 421c7637b5927766a92b509480751de7c442a05851bb3ca47bd9e9bd | 2025-06-02 18:18:23.915802 | orchestrator | | host_status | None | 2025-06-02 18:18:23.915814 | orchestrator | | id | 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd | 2025-06-02 18:18:23.915825 | orchestrator | | image | Cirros 0.6.2 (5c60a273-a8da-49da-838d-b0b2f5da0139) | 2025-06-02 18:18:23.915836 | orchestrator | | key_name | test | 2025-06-02 18:18:23.915847 | orchestrator | | locked | False | 2025-06-02 18:18:23.915857 | orchestrator | | locked_reason | None | 2025-06-02 18:18:23.915869 | orchestrator | | name | test-4 | 2025-06-02 18:18:23.915885 | orchestrator | | pinned_availability_zone | None | 2025-06-02 18:18:23.915901 | orchestrator | | progress | 0 | 2025-06-02 18:18:23.915919 | orchestrator | | project_id | 79bd88faacdc4641bbb97d45a5899593 | 2025-06-02 18:18:23.915930 | orchestrator | | properties | hostname='test-4' | 2025-06-02 
18:18:23.915941 | orchestrator | | security_groups | name='ssh' | 2025-06-02 18:18:23.915952 | orchestrator | | | name='icmp' | 2025-06-02 18:18:23.915963 | orchestrator | | server_groups | None | 2025-06-02 18:18:23.915974 | orchestrator | | status | ACTIVE | 2025-06-02 18:18:23.915985 | orchestrator | | tags | test | 2025-06-02 18:18:23.915995 | orchestrator | | trusted_image_certificates | None | 2025-06-02 18:18:23.916006 | orchestrator | | updated | 2025-06-02T18:17:01Z | 2025-06-02 18:18:23.916022 | orchestrator | | user_id | 915d083c15e042fa8dbbd0ee4db7fbad | 2025-06-02 18:18:23.916034 | orchestrator | | volumes_attached | | 2025-06-02 18:18:23.919473 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-02 18:18:24.210810 | orchestrator | + server_ping 2025-06-02 18:18:24.214396 | orchestrator | ++ tr -d '\r' 2025-06-02 18:18:24.214474 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-02 18:18:27.109966 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:18:27.110182 | orchestrator | + ping -c3 192.168.112.167 2025-06-02 18:18:27.126261 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data. 
2025-06-02 18:18:27.126355 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=10.5 ms 2025-06-02 18:18:28.120976 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=2.80 ms 2025-06-02 18:18:29.121456 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=2.20 ms 2025-06-02 18:18:29.121592 | orchestrator | 2025-06-02 18:18:29.121607 | orchestrator | --- 192.168.112.167 ping statistics --- 2025-06-02 18:18:29.121619 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-02 18:18:29.121640 | orchestrator | rtt min/avg/max/mdev = 2.204/5.169/10.508/3.782 ms 2025-06-02 18:18:29.121971 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:18:29.121991 | orchestrator | + ping -c3 192.168.112.103 2025-06-02 18:18:29.136838 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 2025-06-02 18:18:29.136933 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=10.5 ms 2025-06-02 18:18:30.130428 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.62 ms 2025-06-02 18:18:31.131871 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=2.16 ms 2025-06-02 18:18:31.131977 | orchestrator | 2025-06-02 18:18:31.131993 | orchestrator | --- 192.168.112.103 ping statistics --- 2025-06-02 18:18:31.132006 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-02 18:18:31.132017 | orchestrator | rtt min/avg/max/mdev = 2.160/5.091/10.492/3.823 ms 2025-06-02 18:18:31.132417 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:18:31.132462 | orchestrator | + ping -c3 192.168.112.108 2025-06-02 18:18:31.144386 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 
2025-06-02 18:18:31.144446 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=7.64 ms 2025-06-02 18:18:32.141642 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.50 ms 2025-06-02 18:18:33.143529 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.12 ms 2025-06-02 18:18:33.143697 | orchestrator | 2025-06-02 18:18:33.143713 | orchestrator | --- 192.168.112.108 ping statistics --- 2025-06-02 18:18:33.143727 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-02 18:18:33.143739 | orchestrator | rtt min/avg/max/mdev = 2.124/4.089/7.641/2.516 ms 2025-06-02 18:18:33.143751 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:18:33.143763 | orchestrator | + ping -c3 192.168.112.109 2025-06-02 18:18:33.154323 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 2025-06-02 18:18:33.154471 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=6.01 ms 2025-06-02 18:18:34.152682 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.89 ms 2025-06-02 18:18:35.152665 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.73 ms 2025-06-02 18:18:35.152767 | orchestrator | 2025-06-02 18:18:35.152783 | orchestrator | --- 192.168.112.109 ping statistics --- 2025-06-02 18:18:35.152795 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-02 18:18:35.152841 | orchestrator | rtt min/avg/max/mdev = 1.734/3.545/6.013/1.807 ms 2025-06-02 18:18:35.153312 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:18:35.153339 | orchestrator | + ping -c3 192.168.112.105 2025-06-02 18:18:35.166096 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data. 
2025-06-02 18:18:35.166170 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=6.15 ms 2025-06-02 18:18:36.164058 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.98 ms 2025-06-02 18:18:37.163653 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=1.75 ms 2025-06-02 18:18:37.163786 | orchestrator | 2025-06-02 18:18:37.163840 | orchestrator | --- 192.168.112.105 ping statistics --- 2025-06-02 18:18:37.163855 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-02 18:18:37.163867 | orchestrator | rtt min/avg/max/mdev = 1.752/3.627/6.149/1.852 ms 2025-06-02 18:18:37.164638 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-06-02 18:18:37.164664 | orchestrator | + compute_list 2025-06-02 18:18:37.164676 | orchestrator | + osism manage compute list testbed-node-3 2025-06-02 18:18:40.387908 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-02 18:18:40.388006 | orchestrator | | ID | Name | Status | 2025-06-02 18:18:40.388014 | orchestrator | |--------------------------------------+--------+----------| 2025-06-02 18:18:40.388020 | orchestrator | | 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd | test-4 | ACTIVE | 2025-06-02 18:18:40.388026 | orchestrator | | 4612ae89-311d-43ce-96b5-4e8083b69da2 | test | ACTIVE | 2025-06-02 18:18:40.388032 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-02 18:18:40.685535 | orchestrator | + osism manage compute list testbed-node-4 2025-06-02 18:18:43.706328 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-02 18:18:43.706425 | orchestrator | | ID | Name | Status | 2025-06-02 18:18:43.706437 | orchestrator | |--------------------------------------+--------+----------| 2025-06-02 18:18:43.706446 | orchestrator | | 86266dc3-ede7-45da-b3c5-0df750474de5 | test-2 | ACTIVE | 2025-06-02 18:18:43.706455 | orchestrator | 
+--------------------------------------+--------+----------+ 2025-06-02 18:18:43.977778 | orchestrator | + osism manage compute list testbed-node-5 2025-06-02 18:18:47.366441 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-02 18:18:47.366662 | orchestrator | | ID | Name | Status | 2025-06-02 18:18:47.366694 | orchestrator | |--------------------------------------+--------+----------| 2025-06-02 18:18:47.366710 | orchestrator | | fa51a826-4aee-4e03-9c5f-a21faaad00f1 | test-3 | ACTIVE | 2025-06-02 18:18:47.366721 | orchestrator | | 9d171368-cb14-44b8-ac6d-2c157d57bf42 | test-1 | ACTIVE | 2025-06-02 18:18:47.366732 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-02 18:18:47.622700 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-4 2025-06-02 18:18:50.497099 | orchestrator | 2025-06-02 18:18:50 | INFO  | Live migrating server 86266dc3-ede7-45da-b3c5-0df750474de5 2025-06-02 18:19:03.840488 | orchestrator | 2025-06-02 18:19:03 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress 2025-06-02 18:19:06.222635 | orchestrator | 2025-06-02 18:19:06 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress 2025-06-02 18:19:08.738111 | orchestrator | 2025-06-02 18:19:08 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress 2025-06-02 18:19:11.071280 | orchestrator | 2025-06-02 18:19:11 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress 2025-06-02 18:19:13.369286 | orchestrator | 2025-06-02 18:19:13 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress 2025-06-02 18:19:15.693531 | orchestrator | 2025-06-02 18:19:15 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress 2025-06-02 18:19:17.942808 | orchestrator | 2025-06-02 
18:19:17 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress 2025-06-02 18:19:20.245216 | orchestrator | 2025-06-02 18:19:20 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) completed with status ACTIVE 2025-06-02 18:19:20.560675 | orchestrator | + compute_list 2025-06-02 18:19:20.560800 | orchestrator | + osism manage compute list testbed-node-3 2025-06-02 18:19:23.614376 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-02 18:19:23.614494 | orchestrator | | ID | Name | Status | 2025-06-02 18:19:23.614509 | orchestrator | |--------------------------------------+--------+----------| 2025-06-02 18:19:23.614521 | orchestrator | | 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd | test-4 | ACTIVE | 2025-06-02 18:19:23.614578 | orchestrator | | 86266dc3-ede7-45da-b3c5-0df750474de5 | test-2 | ACTIVE | 2025-06-02 18:19:23.614590 | orchestrator | | 4612ae89-311d-43ce-96b5-4e8083b69da2 | test | ACTIVE | 2025-06-02 18:19:23.614602 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-02 18:19:23.877931 | orchestrator | + osism manage compute list testbed-node-4 2025-06-02 18:19:26.527128 | orchestrator | +------+--------+----------+ 2025-06-02 18:19:26.527218 | orchestrator | | ID | Name | Status | 2025-06-02 18:19:26.527230 | orchestrator | |------+--------+----------| 2025-06-02 18:19:26.527238 | orchestrator | +------+--------+----------+ 2025-06-02 18:19:26.801896 | orchestrator | + osism manage compute list testbed-node-5 2025-06-02 18:19:29.650244 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-02 18:19:29.650952 | orchestrator | | ID | Name | Status | 2025-06-02 18:19:29.650979 | orchestrator | |--------------------------------------+--------+----------| 2025-06-02 18:19:29.650991 | orchestrator | | fa51a826-4aee-4e03-9c5f-a21faaad00f1 | test-3 | ACTIVE | 2025-06-02 18:19:29.651002 | orchestrator | | 
9d171368-cb14-44b8-ac6d-2c157d57bf42 | test-1 | ACTIVE | 2025-06-02 18:19:29.651014 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-02 18:19:29.934602 | orchestrator | + server_ping 2025-06-02 18:19:29.935223 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-02 18:19:29.935391 | orchestrator | ++ tr -d '\r' 2025-06-02 18:19:32.954103 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:19:32.954211 | orchestrator | + ping -c3 192.168.112.167 2025-06-02 18:19:32.967275 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data. 2025-06-02 18:19:32.967393 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=11.2 ms 2025-06-02 18:19:33.961941 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=2.45 ms 2025-06-02 18:19:34.961665 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=2.13 ms 2025-06-02 18:19:34.961815 | orchestrator | 2025-06-02 18:19:34.961846 | orchestrator | --- 192.168.112.167 ping statistics --- 2025-06-02 18:19:34.961865 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-02 18:19:34.961937 | orchestrator | rtt min/avg/max/mdev = 2.125/5.263/11.215/4.210 ms 2025-06-02 18:19:34.961962 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:19:34.961983 | orchestrator | + ping -c3 192.168.112.103 2025-06-02 18:19:34.976695 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 
2025-06-02 18:19:34.976815 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=10.2 ms 2025-06-02 18:19:35.970776 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.93 ms 2025-06-02 18:19:36.970992 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.64 ms 2025-06-02 18:19:36.971105 | orchestrator | 2025-06-02 18:19:36.971120 | orchestrator | --- 192.168.112.103 ping statistics --- 2025-06-02 18:19:36.971133 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-02 18:19:36.971144 | orchestrator | rtt min/avg/max/mdev = 1.640/4.907/10.155/3.747 ms 2025-06-02 18:19:36.971350 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:19:36.971401 | orchestrator | + ping -c3 192.168.112.108 2025-06-02 18:19:36.981309 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 2025-06-02 18:19:36.981370 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=5.14 ms 2025-06-02 18:19:37.979399 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.67 ms 2025-06-02 18:19:38.979488 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.68 ms 2025-06-02 18:19:38.979673 | orchestrator | 2025-06-02 18:19:38.979694 | orchestrator | --- 192.168.112.108 ping statistics --- 2025-06-02 18:19:38.979845 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-02 18:19:38.979858 | orchestrator | rtt min/avg/max/mdev = 1.684/3.162/5.137/1.452 ms 2025-06-02 18:19:38.979883 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:19:38.979895 | orchestrator | + ping -c3 192.168.112.109 2025-06-02 18:19:38.991228 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 
2025-06-02 18:19:38.991323 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=6.83 ms 2025-06-02 18:19:39.988937 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=3.23 ms 2025-06-02 18:19:40.989934 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=1.86 ms 2025-06-02 18:19:40.990090 | orchestrator | 2025-06-02 18:19:40.990105 | orchestrator | --- 192.168.112.109 ping statistics --- 2025-06-02 18:19:40.990116 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-02 18:19:40.990125 | orchestrator | rtt min/avg/max/mdev = 1.863/3.974/6.834/2.097 ms 2025-06-02 18:19:40.990135 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:19:40.990145 | orchestrator | + ping -c3 192.168.112.105 2025-06-02 18:19:41.003612 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data. 2025-06-02 18:19:41.003723 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=8.77 ms 2025-06-02 18:19:41.999791 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=3.01 ms 2025-06-02 18:19:43.002518 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=2.05 ms 2025-06-02 18:19:43.002667 | orchestrator | 2025-06-02 18:19:43.002683 | orchestrator | --- 192.168.112.105 ping statistics --- 2025-06-02 18:19:43.002696 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-02 18:19:43.002707 | orchestrator | rtt min/avg/max/mdev = 2.046/4.606/8.766/2.967 ms 2025-06-02 18:19:43.002719 | orchestrator | + osism manage compute migrate --yes --target testbed-node-3 testbed-node-5 2025-06-02 18:19:46.027364 | orchestrator | 2025-06-02 18:19:46 | INFO  | Live migrating server fa51a826-4aee-4e03-9c5f-a21faaad00f1 2025-06-02 18:19:58.352461 | orchestrator | 2025-06-02 18:19:58 | INFO  | Live migration of 
fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress 2025-06-02 18:20:00.748978 | orchestrator | 2025-06-02 18:20:00 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress 2025-06-02 18:20:03.116763 | orchestrator | 2025-06-02 18:20:03 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress 2025-06-02 18:20:05.472959 | orchestrator | 2025-06-02 18:20:05 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress 2025-06-02 18:20:07.811160 | orchestrator | 2025-06-02 18:20:07 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress 2025-06-02 18:20:10.434618 | orchestrator | 2025-06-02 18:20:10 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress 2025-06-02 18:20:12.765027 | orchestrator | 2025-06-02 18:20:12 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress 2025-06-02 18:20:15.162988 | orchestrator | 2025-06-02 18:20:15 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) completed with status ACTIVE 2025-06-02 18:20:15.163114 | orchestrator | 2025-06-02 18:20:15 | INFO  | Live migrating server 9d171368-cb14-44b8-ac6d-2c157d57bf42 2025-06-02 18:20:28.524747 | orchestrator | 2025-06-02 18:20:28 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress 2025-06-02 18:20:30.874915 | orchestrator | 2025-06-02 18:20:30 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress 2025-06-02 18:20:33.221584 | orchestrator | 2025-06-02 18:20:33 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress 2025-06-02 18:20:35.506833 | orchestrator | 2025-06-02 18:20:35 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress 2025-06-02 18:20:37.816648 | orchestrator 
| 2025-06-02 18:20:37 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress 2025-06-02 18:20:40.175230 | orchestrator | 2025-06-02 18:20:40 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress 2025-06-02 18:20:42.524620 | orchestrator | 2025-06-02 18:20:42 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress 2025-06-02 18:20:44.891595 | orchestrator | 2025-06-02 18:20:44 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress 2025-06-02 18:20:47.243900 | orchestrator | 2025-06-02 18:20:47 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) completed with status ACTIVE 2025-06-02 18:20:47.504089 | orchestrator | + compute_list 2025-06-02 18:20:47.504187 | orchestrator | + osism manage compute list testbed-node-3 2025-06-02 18:20:50.730839 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-02 18:20:50.730934 | orchestrator | | ID | Name | Status | 2025-06-02 18:20:50.730946 | orchestrator | |--------------------------------------+--------+----------| 2025-06-02 18:20:50.730953 | orchestrator | | 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd | test-4 | ACTIVE | 2025-06-02 18:20:50.730959 | orchestrator | | fa51a826-4aee-4e03-9c5f-a21faaad00f1 | test-3 | ACTIVE | 2025-06-02 18:20:50.730966 | orchestrator | | 86266dc3-ede7-45da-b3c5-0df750474de5 | test-2 | ACTIVE | 2025-06-02 18:20:50.730970 | orchestrator | | 9d171368-cb14-44b8-ac6d-2c157d57bf42 | test-1 | ACTIVE | 2025-06-02 18:20:50.730975 | orchestrator | | 4612ae89-311d-43ce-96b5-4e8083b69da2 | test | ACTIVE | 2025-06-02 18:20:50.730979 | orchestrator | +--------------------------------------+--------+----------+ 2025-06-02 18:20:51.019339 | orchestrator | + osism manage compute list testbed-node-4 2025-06-02 18:20:53.511142 | orchestrator | +------+--------+----------+ 2025-06-02 18:20:53.511237 | 
orchestrator | | ID | Name | Status | 2025-06-02 18:20:53.511248 | orchestrator | |------+--------+----------| 2025-06-02 18:20:53.511257 | orchestrator | +------+--------+----------+ 2025-06-02 18:20:53.801199 | orchestrator | + osism manage compute list testbed-node-5 2025-06-02 18:20:56.306927 | orchestrator | +------+--------+----------+ 2025-06-02 18:20:56.307068 | orchestrator | | ID | Name | Status | 2025-06-02 18:20:56.307912 | orchestrator | |------+--------+----------| 2025-06-02 18:20:56.307973 | orchestrator | +------+--------+----------+ 2025-06-02 18:20:56.576154 | orchestrator | + server_ping 2025-06-02 18:20:56.576867 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-02 18:20:56.576903 | orchestrator | ++ tr -d '\r' 2025-06-02 18:20:59.433594 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:20:59.433695 | orchestrator | + ping -c3 192.168.112.167 2025-06-02 18:20:59.443795 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data. 
2025-06-02 18:20:59.443880 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=7.45 ms 2025-06-02 18:21:00.439362 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=1.84 ms 2025-06-02 18:21:01.441671 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=1.78 ms 2025-06-02 18:21:01.441799 | orchestrator | 2025-06-02 18:21:01.441816 | orchestrator | --- 192.168.112.167 ping statistics --- 2025-06-02 18:21:01.441830 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-02 18:21:01.441886 | orchestrator | rtt min/avg/max/mdev = 1.777/3.688/7.445/2.656 ms 2025-06-02 18:21:01.442112 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:21:01.442135 | orchestrator | + ping -c3 192.168.112.103 2025-06-02 18:21:01.455096 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data. 2025-06-02 18:21:01.455211 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=7.88 ms 2025-06-02 18:21:02.451241 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.59 ms 2025-06-02 18:21:03.452613 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.98 ms 2025-06-02 18:21:03.452818 | orchestrator | 2025-06-02 18:21:03.452851 | orchestrator | --- 192.168.112.103 ping statistics --- 2025-06-02 18:21:03.452870 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-02 18:21:03.452881 | orchestrator | rtt min/avg/max/mdev = 1.975/4.149/7.880/2.649 ms 2025-06-02 18:21:03.452995 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:21:03.453012 | orchestrator | + ping -c3 192.168.112.108 2025-06-02 18:21:03.464523 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data. 
2025-06-02 18:21:03.464580 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=6.73 ms 2025-06-02 18:21:04.462930 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.86 ms 2025-06-02 18:21:05.463308 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.82 ms 2025-06-02 18:21:05.463391 | orchestrator | 2025-06-02 18:21:05.463402 | orchestrator | --- 192.168.112.108 ping statistics --- 2025-06-02 18:21:05.463412 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-02 18:21:05.463420 | orchestrator | rtt min/avg/max/mdev = 1.820/3.802/6.731/2.113 ms 2025-06-02 18:21:05.463429 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:21:05.463437 | orchestrator | + ping -c3 192.168.112.109 2025-06-02 18:21:05.476035 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data. 2025-06-02 18:21:05.476101 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=6.70 ms 2025-06-02 18:21:06.473992 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.63 ms 2025-06-02 18:21:07.474598 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=2.04 ms 2025-06-02 18:21:07.474714 | orchestrator | 2025-06-02 18:21:07.474731 | orchestrator | --- 192.168.112.109 ping statistics --- 2025-06-02 18:21:07.474744 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-02 18:21:07.474755 | orchestrator | rtt min/avg/max/mdev = 2.040/3.787/6.698/2.071 ms 2025-06-02 18:21:07.475074 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-02 18:21:07.475099 | orchestrator | + ping -c3 192.168.112.105 2025-06-02 18:21:07.484685 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data. 
2025-06-02 18:21:07.484769 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=6.37 ms
2025-06-02 18:21:08.482869 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.71 ms
2025-06-02 18:21:09.484776 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=1.92 ms
2025-06-02 18:21:09.484881 | orchestrator |
2025-06-02 18:21:09.484897 | orchestrator | --- 192.168.112.105 ping statistics ---
2025-06-02 18:21:09.484911 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-06-02 18:21:09.484922 | orchestrator | rtt min/avg/max/mdev = 1.923/3.667/6.372/1.938 ms
2025-06-02 18:21:09.484933 | orchestrator | + osism manage compute migrate --yes --target testbed-node-4 testbed-node-3
2025-06-02 18:21:12.651614 | orchestrator | 2025-06-02 18:21:12 | INFO  | Live migrating server 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd
2025-06-02 18:21:25.403654 | orchestrator | 2025-06-02 18:21:25 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:21:27.733250 | orchestrator | 2025-06-02 18:21:27 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:21:30.043714 | orchestrator | 2025-06-02 18:21:30 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:21:32.424067 | orchestrator | 2025-06-02 18:21:32 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:21:34.740456 | orchestrator | 2025-06-02 18:21:34 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:21:37.002753 | orchestrator | 2025-06-02 18:21:37 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:21:39.305033 | orchestrator | 2025-06-02 18:21:39 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:21:41.671739 | orchestrator | 2025-06-02 18:21:41 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:21:44.043084 | orchestrator | 2025-06-02 18:21:44 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) completed with status ACTIVE
2025-06-02 18:21:44.043164 | orchestrator | 2025-06-02 18:21:44 | INFO  | Live migrating server fa51a826-4aee-4e03-9c5f-a21faaad00f1
2025-06-02 18:21:56.402893 | orchestrator | 2025-06-02 18:21:56 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:21:58.758744 | orchestrator | 2025-06-02 18:21:58 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:22:01.152189 | orchestrator | 2025-06-02 18:22:01 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:22:03.429357 | orchestrator | 2025-06-02 18:22:03 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:22:05.730547 | orchestrator | 2025-06-02 18:22:05 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:22:08.060279 | orchestrator | 2025-06-02 18:22:08 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:22:10.454657 | orchestrator | 2025-06-02 18:22:10 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:22:12.793761 | orchestrator | 2025-06-02 18:22:12 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) completed with status ACTIVE
2025-06-02 18:22:12.793861 | orchestrator | 2025-06-02 18:22:12 | INFO  | Live migrating server 86266dc3-ede7-45da-b3c5-0df750474de5
2025-06-02 18:22:23.482437 | orchestrator | 2025-06-02 18:22:23 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:22:25.866904 | orchestrator | 2025-06-02 18:22:25 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:22:28.225036 | orchestrator | 2025-06-02 18:22:28 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:22:30.491349 | orchestrator | 2025-06-02 18:22:30 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:22:32.780067 | orchestrator | 2025-06-02 18:22:32 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:22:35.149152 | orchestrator | 2025-06-02 18:22:35 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:22:37.538993 | orchestrator | 2025-06-02 18:22:37 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:22:39.895685 | orchestrator | 2025-06-02 18:22:39 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:22:42.167945 | orchestrator | 2025-06-02 18:22:42 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) completed with status ACTIVE
2025-06-02 18:22:42.168055 | orchestrator | 2025-06-02 18:22:42 | INFO  | Live migrating server 9d171368-cb14-44b8-ac6d-2c157d57bf42
2025-06-02 18:22:52.677823 | orchestrator | 2025-06-02 18:22:52 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:22:55.050754 | orchestrator | 2025-06-02 18:22:55 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:22:57.399847 | orchestrator | 2025-06-02 18:22:57 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:22:59.664233 | orchestrator | 2025-06-02 18:22:59 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:23:02.128831 | orchestrator | 2025-06-02 18:23:02 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:23:04.428660 | orchestrator | 2025-06-02 18:23:04 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:23:06.775887 | orchestrator | 2025-06-02 18:23:06 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:23:09.135403 | orchestrator | 2025-06-02 18:23:09 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) completed with status ACTIVE
2025-06-02 18:23:09.135518 | orchestrator | 2025-06-02 18:23:09 | INFO  | Live migrating server 4612ae89-311d-43ce-96b5-4e8083b69da2
2025-06-02 18:23:19.282620 | orchestrator | 2025-06-02 18:23:19 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:23:21.643990 | orchestrator | 2025-06-02 18:23:21 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:23:24.008333 | orchestrator | 2025-06-02 18:23:24 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:23:26.418934 | orchestrator | 2025-06-02 18:23:26 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:23:28.670732 | orchestrator | 2025-06-02 18:23:28 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:23:30.974102 | orchestrator | 2025-06-02 18:23:30 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:23:33.383960 | orchestrator | 2025-06-02 18:23:33 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:23:35.752717 | orchestrator | 2025-06-02 18:23:35 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:23:38.109947 | orchestrator | 2025-06-02 18:23:38 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:23:40.398732 | orchestrator | 2025-06-02 18:23:40 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:23:42.787530 | orchestrator | 2025-06-02 18:23:42 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) completed with status ACTIVE
2025-06-02 18:23:43.107694 | orchestrator | + compute_list
2025-06-02 18:23:43.107782 | orchestrator | + osism manage compute list testbed-node-3
2025-06-02 18:23:45.938505 | orchestrator | +------+--------+----------+
2025-06-02 18:23:45.938614 | orchestrator | | ID | Name | Status |
2025-06-02 18:23:45.938650 | orchestrator | |------+--------+----------|
2025-06-02 18:23:45.938674 | orchestrator | +------+--------+----------+
2025-06-02 18:23:46.233326 | orchestrator | + osism manage compute list testbed-node-4
2025-06-02 18:23:49.499468 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 18:23:49.499575 | orchestrator | | ID | Name | Status |
2025-06-02 18:23:49.499590 | orchestrator | |--------------------------------------+--------+----------|
2025-06-02 18:23:49.499603 | orchestrator | | 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd | test-4 | ACTIVE |
2025-06-02 18:23:49.499615 | orchestrator | | fa51a826-4aee-4e03-9c5f-a21faaad00f1 | test-3 | ACTIVE |
2025-06-02 18:23:49.499627 | orchestrator | | 86266dc3-ede7-45da-b3c5-0df750474de5 | test-2 | ACTIVE |
2025-06-02 18:23:49.499637 | orchestrator | | 9d171368-cb14-44b8-ac6d-2c157d57bf42 | test-1 | ACTIVE |
2025-06-02 18:23:49.499649 | orchestrator | | 4612ae89-311d-43ce-96b5-4e8083b69da2 | test | ACTIVE |
2025-06-02 18:23:49.499660 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 18:23:49.794398 | orchestrator | + osism manage compute list testbed-node-5
2025-06-02 18:23:52.383137 | orchestrator | +------+--------+----------+
2025-06-02 18:23:52.383251 | orchestrator | | ID | Name | Status |
2025-06-02 18:23:52.383265 | orchestrator | |------+--------+----------|
2025-06-02 18:23:52.383276 | orchestrator | +------+--------+----------+
2025-06-02 18:23:52.637832 | orchestrator | + server_ping
2025-06-02 18:23:52.639471 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-02 18:23:52.639560 | orchestrator | ++ tr -d '\r'
2025-06-02 18:23:56.006967 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:23:56.007071 | orchestrator | + ping -c3 192.168.112.167
2025-06-02 18:23:56.016769 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data.
2025-06-02 18:23:56.016865 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=7.80 ms
2025-06-02 18:23:57.013388 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=2.21 ms
2025-06-02 18:23:58.015104 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=2.63 ms
2025-06-02 18:23:58.015211 | orchestrator |
2025-06-02 18:23:58.015228 | orchestrator | --- 192.168.112.167 ping statistics ---
2025-06-02 18:23:58.015241 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 18:23:58.015252 | orchestrator | rtt min/avg/max/mdev = 2.213/4.212/7.799/2.541 ms
2025-06-02 18:23:58.015264 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:23:58.015276 | orchestrator | + ping -c3 192.168.112.103
2025-06-02 18:23:58.025742 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data.
2025-06-02 18:23:58.025822 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=7.43 ms
2025-06-02 18:23:59.023129 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.81 ms
2025-06-02 18:24:00.024108 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.90 ms
2025-06-02 18:24:00.024227 | orchestrator |
2025-06-02 18:24:00.024253 | orchestrator | --- 192.168.112.103 ping statistics ---
2025-06-02 18:24:00.024272 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 18:24:00.024288 | orchestrator | rtt min/avg/max/mdev = 1.901/4.047/7.427/2.418 ms
2025-06-02 18:24:00.024306 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:24:00.024323 | orchestrator | + ping -c3 192.168.112.108
2025-06-02 18:24:00.037615 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2025-06-02 18:24:00.037704 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=8.38 ms
2025-06-02 18:24:01.034117 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=3.11 ms
2025-06-02 18:24:02.033375 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.69 ms
2025-06-02 18:24:02.033453 | orchestrator |
2025-06-02 18:24:02.033460 | orchestrator | --- 192.168.112.108 ping statistics ---
2025-06-02 18:24:02.033465 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-02 18:24:02.033470 | orchestrator | rtt min/avg/max/mdev = 1.685/4.391/8.382/2.881 ms
2025-06-02 18:24:02.034654 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:24:02.034769 | orchestrator | + ping -c3 192.168.112.109
2025-06-02 18:24:02.045510 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2025-06-02 18:24:02.045587 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=6.59 ms
2025-06-02 18:24:03.043470 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.24 ms
2025-06-02 18:24:04.044750 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=2.11 ms
2025-06-02 18:24:04.044864 | orchestrator |
2025-06-02 18:24:04.044882 | orchestrator | --- 192.168.112.109 ping statistics ---
2025-06-02 18:24:04.044895 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 18:24:04.044908 | orchestrator | rtt min/avg/max/mdev = 2.108/3.647/6.593/2.083 ms
2025-06-02 18:24:04.045467 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:24:04.045495 | orchestrator | + ping -c3 192.168.112.105
2025-06-02 18:24:04.055115 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data.
2025-06-02 18:24:04.055198 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=5.84 ms
2025-06-02 18:24:05.053130 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.75 ms
2025-06-02 18:24:06.053825 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=2.01 ms
2025-06-02 18:24:06.053930 | orchestrator |
2025-06-02 18:24:06.053947 | orchestrator | --- 192.168.112.105 ping statistics ---
2025-06-02 18:24:06.053959 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-06-02 18:24:06.053971 | orchestrator | rtt min/avg/max/mdev = 2.006/3.531/5.843/1.662 ms
2025-06-02 18:24:06.054580 | orchestrator | + osism manage compute migrate --yes --target testbed-node-5 testbed-node-4
2025-06-02 18:24:09.125878 | orchestrator | 2025-06-02 18:24:09 | INFO  | Live migrating server 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd
2025-06-02 18:24:20.064900 | orchestrator | 2025-06-02 18:24:20 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:24:22.515365 | orchestrator | 2025-06-02 18:24:22 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:24:24.876282 | orchestrator | 2025-06-02 18:24:24 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:24:27.167639 | orchestrator | 2025-06-02 18:24:27 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:24:29.431164 | orchestrator | 2025-06-02 18:24:29 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:24:31.715379 | orchestrator | 2025-06-02 18:24:31 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:24:34.270436 | orchestrator | 2025-06-02 18:24:34 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) is still in progress
2025-06-02 18:24:36.576980 | orchestrator | 2025-06-02 18:24:36 | INFO  | Live migration of 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd (test-4) completed with status ACTIVE
2025-06-02 18:24:36.577087 | orchestrator | 2025-06-02 18:24:36 | INFO  | Live migrating server fa51a826-4aee-4e03-9c5f-a21faaad00f1
2025-06-02 18:24:46.784641 | orchestrator | 2025-06-02 18:24:46 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:24:49.316587 | orchestrator | 2025-06-02 18:24:49 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:24:51.821586 | orchestrator | 2025-06-02 18:24:51 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:24:54.176877 | orchestrator | 2025-06-02 18:24:54 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:24:56.464324 | orchestrator | 2025-06-02 18:24:56 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:24:58.719255 | orchestrator | 2025-06-02 18:24:58 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:25:01.221875 | orchestrator | 2025-06-02 18:25:01 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:25:03.535716 | orchestrator | 2025-06-02 18:25:03 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) is still in progress
2025-06-02 18:25:06.054829 | orchestrator | 2025-06-02 18:25:06 | INFO  | Live migration of fa51a826-4aee-4e03-9c5f-a21faaad00f1 (test-3) completed with status ACTIVE
2025-06-02 18:25:06.054962 | orchestrator | 2025-06-02 18:25:06 | INFO  | Live migrating server 86266dc3-ede7-45da-b3c5-0df750474de5
2025-06-02 18:25:16.380181 | orchestrator | 2025-06-02 18:25:16 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:25:18.758260 | orchestrator | 2025-06-02 18:25:18 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:25:21.122284 | orchestrator | 2025-06-02 18:25:21 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:25:23.429339 | orchestrator | 2025-06-02 18:25:23 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:25:25.688481 | orchestrator | 2025-06-02 18:25:25 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:25:27.966861 | orchestrator | 2025-06-02 18:25:27 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:25:30.246819 | orchestrator | 2025-06-02 18:25:30 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) is still in progress
2025-06-02 18:25:32.635211 | orchestrator | 2025-06-02 18:25:32 | INFO  | Live migration of 86266dc3-ede7-45da-b3c5-0df750474de5 (test-2) completed with status ACTIVE
2025-06-02 18:25:32.635320 | orchestrator | 2025-06-02 18:25:32 | INFO  | Live migrating server 9d171368-cb14-44b8-ac6d-2c157d57bf42
2025-06-02 18:25:42.874428 | orchestrator | 2025-06-02 18:25:42 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:25:45.230240 | orchestrator | 2025-06-02 18:25:45 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:25:47.571504 | orchestrator | 2025-06-02 18:25:47 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:25:49.948554 | orchestrator | 2025-06-02 18:25:49 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:25:52.357159 | orchestrator | 2025-06-02 18:25:52 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:25:54.738652 | orchestrator | 2025-06-02 18:25:54 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:25:57.109149 | orchestrator | 2025-06-02 18:25:57 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) is still in progress
2025-06-02 18:25:59.470692 | orchestrator | 2025-06-02 18:25:59 | INFO  | Live migration of 9d171368-cb14-44b8-ac6d-2c157d57bf42 (test-1) completed with status ACTIVE
2025-06-02 18:25:59.470844 | orchestrator | 2025-06-02 18:25:59 | INFO  | Live migrating server 4612ae89-311d-43ce-96b5-4e8083b69da2
2025-06-02 18:26:09.740175 | orchestrator | 2025-06-02 18:26:09 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:26:12.105468 | orchestrator | 2025-06-02 18:26:12 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:26:14.500311 | orchestrator | 2025-06-02 18:26:14 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:26:16.906746 | orchestrator | 2025-06-02 18:26:16 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:26:19.190130 | orchestrator | 2025-06-02 18:26:19 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:26:21.498125 | orchestrator | 2025-06-02 18:26:21 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:26:23.840377 | orchestrator | 2025-06-02 18:26:23 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:26:26.152455 | orchestrator | 2025-06-02 18:26:26 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:26:28.471970 | orchestrator | 2025-06-02 18:26:28 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) is still in progress
2025-06-02 18:26:30.836931 | orchestrator | 2025-06-02 18:26:30 | INFO  | Live migration of 4612ae89-311d-43ce-96b5-4e8083b69da2 (test) completed with status ACTIVE
2025-06-02 18:26:31.115456 | orchestrator | + compute_list
2025-06-02 18:26:31.115556 | orchestrator | + osism manage compute list testbed-node-3
2025-06-02 18:26:33.650750 | orchestrator | +------+--------+----------+
2025-06-02 18:26:33.650833 | orchestrator | | ID | Name | Status |
2025-06-02 18:26:33.650840 | orchestrator | |------+--------+----------|
2025-06-02 18:26:33.650846 | orchestrator | +------+--------+----------+
2025-06-02 18:26:33.988316 | orchestrator | + osism manage compute list testbed-node-4
2025-06-02 18:26:36.616279 | orchestrator | +------+--------+----------+
2025-06-02 18:26:36.616388 | orchestrator | | ID | Name | Status |
2025-06-02 18:26:36.616403 | orchestrator | |------+--------+----------|
2025-06-02 18:26:36.616415 | orchestrator | +------+--------+----------+
2025-06-02 18:26:36.882102 | orchestrator | + osism manage compute list testbed-node-5
2025-06-02 18:26:39.982857 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 18:26:39.982995 | orchestrator | | ID | Name | Status |
2025-06-02 18:26:39.983012 | orchestrator | |--------------------------------------+--------+----------|
2025-06-02 18:26:39.983023 | orchestrator | | 2dea3f5a-c94d-4bc8-b886-8ae09b2a0cbd | test-4 | ACTIVE |
2025-06-02 18:26:39.983034 | orchestrator | | fa51a826-4aee-4e03-9c5f-a21faaad00f1 | test-3 | ACTIVE |
2025-06-02 18:26:39.983045 | orchestrator | | 86266dc3-ede7-45da-b3c5-0df750474de5 | test-2 | ACTIVE |
2025-06-02 18:26:39.983056 | orchestrator | | 9d171368-cb14-44b8-ac6d-2c157d57bf42 | test-1 | ACTIVE |
2025-06-02 18:26:39.983067 | orchestrator | | 4612ae89-311d-43ce-96b5-4e8083b69da2 | test | ACTIVE |
2025-06-02 18:26:39.983078 | orchestrator | +--------------------------------------+--------+----------+
2025-06-02 18:26:40.248562 | orchestrator | + server_ping
2025-06-02 18:26:40.249622 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-06-02 18:26:40.249659 | orchestrator | ++ tr -d '\r'
2025-06-02 18:26:43.194860 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:26:43.195026 | orchestrator | + ping -c3 192.168.112.167
2025-06-02 18:26:43.209276 | orchestrator | PING 192.168.112.167 (192.168.112.167) 56(84) bytes of data.
2025-06-02 18:26:43.209369 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=1 ttl=63 time=11.3 ms
2025-06-02 18:26:44.202608 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=2 ttl=63 time=2.95 ms
2025-06-02 18:26:45.203547 | orchestrator | 64 bytes from 192.168.112.167: icmp_seq=3 ttl=63 time=2.06 ms
2025-06-02 18:26:45.203683 | orchestrator |
2025-06-02 18:26:45.203709 | orchestrator | --- 192.168.112.167 ping statistics ---
2025-06-02 18:26:45.203765 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 18:26:45.203786 | orchestrator | rtt min/avg/max/mdev = 2.057/5.434/11.300/4.163 ms
2025-06-02 18:26:45.203855 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:26:45.203872 | orchestrator | + ping -c3 192.168.112.103
2025-06-02 18:26:45.218576 | orchestrator | PING 192.168.112.103 (192.168.112.103) 56(84) bytes of data.
2025-06-02 18:26:45.218662 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=1 ttl=63 time=9.69 ms
2025-06-02 18:26:46.213438 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=2 ttl=63 time=2.60 ms
2025-06-02 18:26:47.214636 | orchestrator | 64 bytes from 192.168.112.103: icmp_seq=3 ttl=63 time=1.52 ms
2025-06-02 18:26:47.214710 | orchestrator |
2025-06-02 18:26:47.214716 | orchestrator | --- 192.168.112.103 ping statistics ---
2025-06-02 18:26:47.214723 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms
2025-06-02 18:26:47.214728 | orchestrator | rtt min/avg/max/mdev = 1.515/4.599/9.689/3.625 ms
2025-06-02 18:26:47.214749 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:26:47.214778 | orchestrator | + ping -c3 192.168.112.108
2025-06-02 18:26:47.222208 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2025-06-02 18:26:47.222259 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=4.14 ms
2025-06-02 18:26:48.222426 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.70 ms
2025-06-02 18:26:49.225350 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=3.40 ms
2025-06-02 18:26:49.225485 | orchestrator |
2025-06-02 18:26:49.225515 | orchestrator | --- 192.168.112.108 ping statistics ---
2025-06-02 18:26:49.225535 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 18:26:49.225553 | orchestrator | rtt min/avg/max/mdev = 2.695/3.412/4.144/0.591 ms
2025-06-02 18:26:49.226889 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:26:49.227010 | orchestrator | + ping -c3 192.168.112.109
2025-06-02 18:26:49.238426 | orchestrator | PING 192.168.112.109 (192.168.112.109) 56(84) bytes of data.
2025-06-02 18:26:49.238518 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=1 ttl=63 time=6.84 ms
2025-06-02 18:26:50.235199 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=2 ttl=63 time=2.70 ms
2025-06-02 18:26:51.236835 | orchestrator | 64 bytes from 192.168.112.109: icmp_seq=3 ttl=63 time=2.02 ms
2025-06-02 18:26:51.237113 | orchestrator |
2025-06-02 18:26:51.237144 | orchestrator | --- 192.168.112.109 ping statistics ---
2025-06-02 18:26:51.237157 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 18:26:51.237168 | orchestrator | rtt min/avg/max/mdev = 2.018/3.852/6.841/2.131 ms
2025-06-02 18:26:51.237547 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-06-02 18:26:51.237571 | orchestrator | + ping -c3 192.168.112.105
2025-06-02 18:26:51.248768 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data.
2025-06-02 18:26:51.248847 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=6.24 ms
2025-06-02 18:26:52.246682 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.49 ms
2025-06-02 18:26:53.247736 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=2.02 ms
2025-06-02 18:26:53.248755 | orchestrator |
2025-06-02 18:26:53.248811 | orchestrator | --- 192.168.112.105 ping statistics ---
2025-06-02 18:26:53.248828 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-06-02 18:26:53.248840 | orchestrator | rtt min/avg/max/mdev = 2.023/3.583/6.237/1.885 ms
2025-06-02 18:26:53.466625 | orchestrator | ok: Runtime: 0:20:36.001711
2025-06-02 18:26:53.528149 |
2025-06-02 18:26:53.528341 | TASK [Run tempest]
2025-06-02 18:26:54.068713 | orchestrator | skipping: Conditional result was False
2025-06-02 18:26:54.090366 |
2025-06-02 18:26:54.090541 | TASK [Check prometheus alert status]
2025-06-02 18:26:54.631941 | orchestrator | skipping: Conditional result was False
2025-06-02 18:26:54.635545 |
2025-06-02 18:26:54.635765 | PLAY RECAP
2025-06-02 18:26:54.635936 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-06-02 18:26:54.636076 |
2025-06-02 18:26:54.875614 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-06-02 18:26:54.879526 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-06-02 18:26:55.676971 |
2025-06-02 18:26:55.677165 | PLAY [Post output play]
2025-06-02 18:26:55.695293 |
2025-06-02 18:26:55.695433 | LOOP [stage-output : Register sources]
2025-06-02 18:26:55.761889 |
2025-06-02 18:26:55.762199 | TASK [stage-output : Check sudo]
2025-06-02 18:26:56.634415 | orchestrator | sudo: a password is required
2025-06-02 18:26:56.800312 | orchestrator | ok: Runtime: 0:00:00.008701
2025-06-02 18:26:56.815112 |
2025-06-02 18:26:56.815271 | LOOP [stage-output : Set source and destination for files and folders]
2025-06-02 18:26:56.856107 |
2025-06-02 18:26:56.856392 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-06-02 18:26:56.936439 | orchestrator | ok
2025-06-02 18:26:56.944370 |
2025-06-02 18:26:56.944513 | LOOP [stage-output : Ensure target folders exist]
2025-06-02 18:26:57.416737 | orchestrator | ok: "docs"
2025-06-02 18:26:57.417127 |
2025-06-02 18:26:57.691044 | orchestrator | ok: "artifacts"
2025-06-02 18:26:57.945727 | orchestrator | ok: "logs"
2025-06-02 18:26:57.968832 |
2025-06-02 18:26:57.969023 | LOOP [stage-output : Copy files and folders to staging folder]
2025-06-02 18:26:58.007127 |
2025-06-02 18:26:58.007437 | TASK [stage-output : Make all log files readable]
2025-06-02 18:26:58.291462 | orchestrator | ok
2025-06-02 18:26:58.300189 |
2025-06-02 18:26:58.300325 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-06-02 18:26:58.337110 | orchestrator | skipping: Conditional result was False
2025-06-02 18:26:58.353437 |
2025-06-02 18:26:58.353587 | TASK [stage-output : Discover log files for compression]
2025-06-02 18:26:58.390348 | orchestrator | skipping: Conditional result was False
2025-06-02 18:26:58.405740 |
2025-06-02 18:26:58.405912 | LOOP [stage-output : Archive everything from logs]
2025-06-02 18:26:58.458015 |
2025-06-02 18:26:58.458255 | PLAY [Post cleanup play]
2025-06-02 18:26:58.468996 |
2025-06-02 18:26:58.469142 | TASK [Set cloud fact (Zuul deployment)]
2025-06-02 18:26:58.522175 | orchestrator | ok
2025-06-02 18:26:58.536154 |
2025-06-02 18:26:58.536320 | TASK [Set cloud fact (local deployment)]
2025-06-02 18:26:58.571062 | orchestrator | skipping: Conditional result was False
2025-06-02 18:26:58.589993 |
2025-06-02 18:26:58.590147 | TASK [Clean the cloud environment]
2025-06-02 18:26:59.409734 | orchestrator | 2025-06-02 18:26:59 - clean up servers
2025-06-02 18:27:00.135811 | orchestrator | 2025-06-02 18:27:00 - testbed-manager
2025-06-02 18:27:00.220419 | orchestrator | 2025-06-02 18:27:00 - testbed-node-2
2025-06-02 18:27:00.312599 | orchestrator | 2025-06-02 18:27:00 - testbed-node-4
2025-06-02 18:27:00.407923 | orchestrator | 2025-06-02 18:27:00 - testbed-node-3
2025-06-02 18:27:00.509812 | orchestrator | 2025-06-02 18:27:00 - testbed-node-0
2025-06-02 18:27:00.613074 | orchestrator | 2025-06-02 18:27:00 - testbed-node-5
2025-06-02 18:27:00.710314 | orchestrator | 2025-06-02 18:27:00 - testbed-node-1
2025-06-02 18:27:00.788294 | orchestrator | 2025-06-02 18:27:00 - clean up keypairs
2025-06-02 18:27:00.811548 | orchestrator | 2025-06-02 18:27:00 - testbed
2025-06-02 18:27:00.836226 | orchestrator | 2025-06-02 18:27:00 - wait for servers to be gone
2025-06-02 18:27:11.656794 | orchestrator | 2025-06-02 18:27:11 - clean up ports
2025-06-02 18:27:11.841194 | orchestrator | 2025-06-02 18:27:11 - 0cca1f76-0ae2-4eb8-b605-b312ad958570
2025-06-02 18:27:12.084715 | orchestrator | 2025-06-02 18:27:12 - 0d8fc326-6596-4617-a8e8-eebd7cb3377a
2025-06-02 18:27:12.386653 | orchestrator | 2025-06-02 18:27:12 - 1834ff6c-b46c-4bd1-a7ec-2529f59aa6e4
2025-06-02 18:27:12.596608 | orchestrator | 2025-06-02 18:27:12 - 3be1942e-6dcd-4957-affe-a4ac869a5626
2025-06-02 18:27:12.812839 | orchestrator | 2025-06-02 18:27:12 - 7601aca0-2d62-4125-a775-04beeeec6bd6
2025-06-02 18:27:13.213811 | orchestrator | 2025-06-02 18:27:13 - 98950d27-ac12-452d-ac45-27e5ba8859b8
2025-06-02 18:27:13.417584 | orchestrator | 2025-06-02 18:27:13 - d5cbd31d-7bcb-4d74-bc67-c1fba69aad74
2025-06-02 18:27:13.644138 | orchestrator | 2025-06-02 18:27:13 - clean up volumes
2025-06-02 18:27:13.764484 | orchestrator | 2025-06-02 18:27:13 - testbed-volume-5-node-base
2025-06-02 18:27:13.930428 | orchestrator | 2025-06-02 18:27:13 - testbed-volume-0-node-base
2025-06-02 18:27:13.971789 | orchestrator | 2025-06-02 18:27:13 - testbed-volume-3-node-base
2025-06-02 18:27:14.012233 | orchestrator | 2025-06-02 18:27:14 - testbed-volume-2-node-base
2025-06-02 18:27:14.051916 | orchestrator | 2025-06-02 18:27:14 - testbed-volume-4-node-base
2025-06-02 18:27:14.092716 | orchestrator | 2025-06-02 18:27:14 - testbed-volume-1-node-base
2025-06-02 18:27:14.132435 | orchestrator | 2025-06-02 18:27:14 - testbed-volume-manager-base
2025-06-02 18:27:14.175848 | orchestrator | 2025-06-02 18:27:14 - testbed-volume-0-node-3
2025-06-02 18:27:14.217310 | orchestrator | 2025-06-02 18:27:14 - testbed-volume-3-node-3
2025-06-02 18:27:14.263295 | orchestrator | 2025-06-02 18:27:14 - testbed-volume-4-node-4
2025-06-02 18:27:14.305512 | orchestrator | 2025-06-02 18:27:14 - testbed-volume-6-node-3
2025-06-02 18:27:14.346226 | orchestrator | 2025-06-02 18:27:14 - testbed-volume-1-node-4
2025-06-02 18:27:14.388321 | orchestrator | 2025-06-02 18:27:14 - testbed-volume-8-node-5
2025-06-02 18:27:14.432695 | orchestrator | 2025-06-02 18:27:14 - testbed-volume-2-node-5
2025-06-02 18:27:14.477844 | orchestrator | 2025-06-02 18:27:14 -
testbed-volume-5-node-5 2025-06-02 18:27:14.519613 | orchestrator | 2025-06-02 18:27:14 - testbed-volume-7-node-4 2025-06-02 18:27:14.561229 | orchestrator | 2025-06-02 18:27:14 - disconnect routers 2025-06-02 18:27:14.687729 | orchestrator | 2025-06-02 18:27:14 - testbed 2025-06-02 18:27:15.585491 | orchestrator | 2025-06-02 18:27:15 - clean up subnets 2025-06-02 18:27:15.633889 | orchestrator | 2025-06-02 18:27:15 - subnet-testbed-management 2025-06-02 18:27:15.797235 | orchestrator | 2025-06-02 18:27:15 - clean up networks 2025-06-02 18:27:15.971832 | orchestrator | 2025-06-02 18:27:15 - net-testbed-management 2025-06-02 18:27:16.246640 | orchestrator | 2025-06-02 18:27:16 - clean up security groups 2025-06-02 18:27:16.292898 | orchestrator | 2025-06-02 18:27:16 - testbed-management 2025-06-02 18:27:16.438958 | orchestrator | 2025-06-02 18:27:16 - testbed-node 2025-06-02 18:27:16.563874 | orchestrator | 2025-06-02 18:27:16 - clean up floating ips 2025-06-02 18:27:16.596471 | orchestrator | 2025-06-02 18:27:16 - 81.163.192.65 2025-06-02 18:27:16.982304 | orchestrator | 2025-06-02 18:27:16 - clean up routers 2025-06-02 18:27:17.097937 | orchestrator | 2025-06-02 18:27:17 - testbed 2025-06-02 18:27:18.659554 | orchestrator | ok: Runtime: 0:00:19.494653 2025-06-02 18:27:18.664338 | 2025-06-02 18:27:18.664555 | PLAY RECAP 2025-06-02 18:27:18.664859 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-06-02 18:27:18.664918 | 2025-06-02 18:27:18.841684 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-06-02 18:27:18.842679 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-06-02 18:27:19.572940 | 2025-06-02 18:27:19.573112 | PLAY [Cleanup play] 2025-06-02 18:27:19.589298 | 2025-06-02 18:27:19.589449 | TASK [Set cloud fact (Zuul deployment)] 2025-06-02 18:27:19.650517 | orchestrator | ok 2025-06-02 18:27:19.661346 | 2025-06-02 18:27:19.661528 
| TASK [Set cloud fact (local deployment)] 2025-06-02 18:27:19.686443 | orchestrator | skipping: Conditional result was False 2025-06-02 18:27:19.694826 | 2025-06-02 18:27:19.694979 | TASK [Clean the cloud environment] 2025-06-02 18:27:20.833610 | orchestrator | 2025-06-02 18:27:20 - clean up servers 2025-06-02 18:27:21.304677 | orchestrator | 2025-06-02 18:27:21 - clean up keypairs 2025-06-02 18:27:21.324242 | orchestrator | 2025-06-02 18:27:21 - wait for servers to be gone 2025-06-02 18:27:21.372855 | orchestrator | 2025-06-02 18:27:21 - clean up ports 2025-06-02 18:27:21.445023 | orchestrator | 2025-06-02 18:27:21 - clean up volumes 2025-06-02 18:27:21.519033 | orchestrator | 2025-06-02 18:27:21 - disconnect routers 2025-06-02 18:27:21.541162 | orchestrator | 2025-06-02 18:27:21 - clean up subnets 2025-06-02 18:27:21.559900 | orchestrator | 2025-06-02 18:27:21 - clean up networks 2025-06-02 18:27:21.679464 | orchestrator | 2025-06-02 18:27:21 - clean up security groups 2025-06-02 18:27:21.717184 | orchestrator | 2025-06-02 18:27:21 - clean up floating ips 2025-06-02 18:27:21.741300 | orchestrator | 2025-06-02 18:27:21 - clean up routers 2025-06-02 18:27:22.230477 | orchestrator | ok: Runtime: 0:00:01.307832 2025-06-02 18:27:22.232146 | 2025-06-02 18:27:22.232223 | PLAY RECAP 2025-06-02 18:27:22.232274 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-06-02 18:27:22.232297 | 2025-06-02 18:27:22.356012 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-06-02 18:27:22.356978 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-06-02 18:27:23.095564 | 2025-06-02 18:27:23.095739 | PLAY [Base post-fetch] 2025-06-02 18:27:23.112032 | 2025-06-02 18:27:23.112183 | TASK [fetch-output : Set log path for multiple nodes] 2025-06-02 18:27:23.177459 | orchestrator | skipping: Conditional result was False 2025-06-02 18:27:23.184978 | 
2025-06-02 18:27:23.185139 | TASK [fetch-output : Set log path for single node] 2025-06-02 18:27:23.241290 | orchestrator | ok 2025-06-02 18:27:23.250212 | 2025-06-02 18:27:23.250361 | LOOP [fetch-output : Ensure local output dirs] 2025-06-02 18:27:23.775205 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/c5c8a8042d63426182240941ef017861/work/logs" 2025-06-02 18:27:24.088611 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c5c8a8042d63426182240941ef017861/work/artifacts" 2025-06-02 18:27:24.357868 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/c5c8a8042d63426182240941ef017861/work/docs" 2025-06-02 18:27:24.378493 | 2025-06-02 18:27:24.378651 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-06-02 18:27:25.295602 | orchestrator | changed: .d..t...... ./ 2025-06-02 18:27:25.296064 | orchestrator | changed: All items complete 2025-06-02 18:27:25.296138 | 2025-06-02 18:27:26.026693 | orchestrator | changed: .d..t...... ./ 2025-06-02 18:27:26.785526 | orchestrator | changed: .d..t...... 
./ 2025-06-02 18:27:26.818875 | 2025-06-02 18:27:26.819036 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-06-02 18:27:26.856981 | orchestrator | skipping: Conditional result was False 2025-06-02 18:27:26.859594 | orchestrator | skipping: Conditional result was False 2025-06-02 18:27:26.885529 | 2025-06-02 18:27:26.885747 | PLAY RECAP 2025-06-02 18:27:26.885938 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-06-02 18:27:26.886020 | 2025-06-02 18:27:27.024841 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-06-02 18:27:27.025823 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-06-02 18:27:27.788228 | 2025-06-02 18:27:27.788416 | PLAY [Base post] 2025-06-02 18:27:27.803534 | 2025-06-02 18:27:27.803686 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-06-02 18:27:28.788219 | orchestrator | changed 2025-06-02 18:27:28.799001 | 2025-06-02 18:27:28.799134 | PLAY RECAP 2025-06-02 18:27:28.799231 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-06-02 18:27:28.799330 | 2025-06-02 18:27:28.927439 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-06-02 18:27:28.929692 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-06-02 18:27:29.728154 | 2025-06-02 18:27:29.728342 | PLAY [Base post-logs] 2025-06-02 18:27:29.739985 | 2025-06-02 18:27:29.740167 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-06-02 18:27:30.196222 | localhost | changed 2025-06-02 18:27:30.213106 | 2025-06-02 18:27:30.213305 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-06-02 18:27:30.252843 | localhost | ok 2025-06-02 18:27:30.260093 | 2025-06-02 18:27:30.260272 | TASK [Set zuul-log-path fact] 2025-06-02 
18:27:30.280132 | localhost | ok 2025-06-02 18:27:30.297348 | 2025-06-02 18:27:30.297517 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-06-02 18:27:30.327159 | localhost | ok 2025-06-02 18:27:30.333553 | 2025-06-02 18:27:30.333712 | TASK [upload-logs : Create log directories] 2025-06-02 18:27:30.861470 | localhost | changed 2025-06-02 18:27:30.865006 | 2025-06-02 18:27:30.865119 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-06-02 18:27:31.396407 | localhost -> localhost | ok: Runtime: 0:00:00.006898 2025-06-02 18:27:31.405239 | 2025-06-02 18:27:31.405433 | TASK [upload-logs : Upload logs to log server] 2025-06-02 18:27:31.991550 | localhost | Output suppressed because no_log was given 2025-06-02 18:27:31.994973 | 2025-06-02 18:27:31.995141 | LOOP [upload-logs : Compress console log and json output] 2025-06-02 18:27:32.056028 | localhost | skipping: Conditional result was False 2025-06-02 18:27:32.060707 | localhost | skipping: Conditional result was False 2025-06-02 18:27:32.068478 | 2025-06-02 18:27:32.069880 | LOOP [upload-logs : Upload compressed console log and json output] 2025-06-02 18:27:32.120486 | localhost | skipping: Conditional result was False 2025-06-02 18:27:32.121197 | 2025-06-02 18:27:32.124965 | localhost | skipping: Conditional result was False 2025-06-02 18:27:32.133032 | 2025-06-02 18:27:32.133268 | LOOP [upload-logs : Upload console log and json output]